CN115346002B - Virtual scene construction method and rehabilitation training application thereof - Google Patents

Virtual scene construction method and rehabilitation training application thereof

Info

Publication number
CN115346002B
CN115346002B (application CN202211256882.9A)
Authority
CN
China
Prior art keywords
image
sequence
sub
region
regions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211256882.9A
Other languages
Chinese (zh)
Other versions
CN115346002A (en)
Inventor
黄峰
罗子芮
骆志强
尹博
刘瑞
徐硕瑀
谢韶东
陈仰新
方永宁
华夏
陶旭泓
熊丹宇
梁桂林
黎志豪
王安涛
谢航
江焕然
吴梦瑶
李宇彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan University
Original Assignee
Foshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan University
Priority to CN202211256882.9A priority Critical patent/CN115346002B/en
Publication of CN115346002A publication Critical patent/CN115346002A/en
Application granted granted Critical
Publication of CN115346002B publication Critical patent/CN115346002B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general, involving 3D image data

Abstract

The invention belongs to the technical field of virtual reality and provides a virtual scene construction method and a motion rehabilitation training application thereof. A plurality of images are acquired to form an image sequence; mapping domains are marked to construct region mapping sequences; region indentation processing is performed on each region mapping sequence to obtain an optimized image sequence; and three-dimensional reconstruction is performed to obtain a three-dimensional scene model. Visual attention is guided to the comfort zone by image blocks with strong similarity continuity, which prevents 3D dizziness, improves 3D immersion, raises overall processing speed, reduces stuttering, and increases the frame rate. Prominent line-segment features stand out more clearly after restoration, each image used for three-dimensional scene model reconstruction keeps continuous curve boundaries, the post-reconstruction frame rate is preserved, and the repaired smooth lines guide the user's gaze toward the comfort zone as far as possible, so that 3D dizziness is avoided as far as possible.

Description

Virtual scene construction method and rehabilitation training application thereof
Technical Field
The invention belongs to the technical field of virtual reality, and particularly relates to a virtual scene construction method and a rehabilitation training application thereof.
Background
In virtual reality technology, constructing the three-dimensional virtual scene is essential. After a three-dimensional virtual scene is rendered, it contains a large number of in-focus pictures as well as element-rich out-of-focus pictures. Because the pictures are vivid, the central nervous system takes them to be real, yet the other motion-sensing organs (such as the vestibular system of the inner ear) sense no motion at all; the two bodily signals then contradict each other with great unpredictability, so the nervous system strongly issues vertigo signals. The optic nerve must discard the elements in the out-of-focus pictures because they force the eyes to refocus, and frequent focusing failures easily cause 3D dizziness, especially for patients who require rehabilitation exercise.
To prevent 3D dizziness, the reference (Shibata T, Kim J, Hoffman D M, et al. The zone of comfort: predicting visual discomfort with stereo displays [J]. Journal of Vision, 2011, 11(8): 11) proposes that when the vergence distance of the line of sight lies within a certain range before and behind the focal distance, the viewer can still obtain 3D information easily and without visual fatigue; this range is called the "comfort zone" for viewing 3D stereoscopic content. Accordingly, in the various prior arts using the parallel binocular principle, the reconstructed 3D foreground is generally adjusted to lie in the comfort zone of the virtual scene to prevent 3D dizziness. However, the position of the 3D foreground in the virtual scene must be adjusted manually when the device leaves the factory; although this effectively reduces 3D dizziness, it greatly limits flexibility in later use and degrades the user's 3D immersion experience. The three-dimensional shape reconstruction method disclosed in CN111260776A takes another approach: it denoises the real scene to greatly reduce erroneous depth information in the reconstruction result of the object to be reconstructed and to improve reconstruction accuracy for samples with sparse texture detail, thereby indirectly preventing 3D dizziness. However, that method does not exploit the comfort zone of the virtual scene, and its global denoising and sparse-texture enhancement inevitably reduce overall processing speed, so stuttering and low frame rates can occur when large three-dimensional shapes are loaded.
Disclosure of Invention
The invention aims to provide a virtual scene construction method and an exercise rehabilitation training application thereof, which solve one or more technical problems existing in the prior art and at least provide a beneficial alternative or favorable conditions.
In order to achieve the above object, according to an aspect of the present invention, there is provided a virtual scene construction method, including:
S100, acquiring a plurality of images to form an image sequence;
S200, identifying the curves in each image of the image sequence;
S300, performing edge detection on each image in the image sequence to obtain edge lines, and dividing each image in the sequence into a plurality of sub-regions along the edge lines;
S400, marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence;
S500, performing region indentation processing on each region mapping sequence to obtain an optimized image sequence;
S600, performing three-dimensional reconstruction on each image in the image sequence to obtain a three-dimensional scene model.
Further, in S100, the image sequence is a sequence of a plurality of images of the scene object acquired from different viewing angles and positions.
Preferably, in S100, the method for acquiring a plurality of images of the scene object from different viewing angles and positions to form the image sequence is: calibrate a CCD camera by Zhang's calibration method, and acquire images of the scene object from different viewing angles and positions with the calibrated camera; that is, the camera rotates clockwise about the geometric center of the scene object through [0°, 360°], acquiring one image of the scene object every 1 to 15 degrees of rotation; the acquired images form the image sequence in order.
Preferably, in S100, the scene object includes a house, a tree, a person, or furniture.
Further, in S100, the image sequence includes a plurality of photographs of the same scene object.
Further, in S200, the method for detecting the curve features of each image in the image sequence is any one of the K-Segment principal curve extraction algorithm, the Douglas-Peucker algorithm, and the randomized Hough transform algorithm.
Further, in S300, the edge detection algorithm is any one of the Canny edge detection algorithm, the Sobel edge detection algorithm, the Structured Forests (structured edge detection) algorithm, and the HED (holistically-nested edge detection) algorithm.
Further, in S400, the method of marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence, comprises the following steps:
recording the image sequence as Pic, the ith image in Pic as Pic(i), and the jth sub-region in Pic(i) as Pic(i, j), where i ∈ [1, N1], N1 being the number of images in Pic, and j ∈ [1, N2], N2 being the number of sub-regions in Pic(i);
within the value range of j, marking mapping domains for the sub-regions of each image in turn according to the curves inside Pic(i, j), as follows:
taking the sub-region Pic(i, j) as the anchor domain and its longest curve as the anchor line PTL; from each image in Pic, taking the sub-region (excluding the anchor domain itself) whose structural similarity (SSIM) with the anchor domain is largest, these sub-regions forming the first candidate sequence PS1; recording the kth sub-region in PS1 as PS1(k), where k ∈ [1, N3], N3 being the number of sub-regions in PS1; here i, j and k are serial numbers;
searching PS1 for all sub-regions meeting the anchoring mapping condition and marking them as mapping domains of the anchor domain;
the anchoring mapping conditions are as follows:
|PMean(PTL,PS1(k))-GPm(1,k-1)|≥|PiMEAN(PTL,PS1)-PMax(PTL,PS1(k))|;
where GPm(1, k-1) is the average similarity between the longest curves of the 1st to (k-1)th sub-regions in PS1 and the longest curve in PS1(k); PMean(PTL, PS1(k)) is the mean similarity between PTL and each curve in PS1(k); PiMEAN(PTL, PS1) is the mean of the similarities between PTL and the longest curve of every sub-region in PS1; PMax(PTL, PS1(k)) is the maximum similarity between PTL and the curves in PS1(k);
Further, in order to estimate the degree of temporal consistency after three-dimensional reconstruction, the anchoring mapping condition further includes:
PrePi(PTL,PS1(k))≥PMean(PTL,PS1(k))+|PiMEAN(PTL,PS1)-PMax(PTL,PS1(k))|;
where PrePi(PTL, PS1(k)) is the predicted reconstruction degree between the anchor line PTL and the longest curve in PS1(k) after three-dimensional reconstruction; PMean(PTL, PS1(k)), PiMEAN(PTL, PS1), and PMax(PTL, PS1(k)) are defined as above;
the calculation method of the prediction reconstruction degree PrePi (PTL, PS1 (k)) is:
(the formula for the predicted reconstruction degree PrePi(PTL, PS1(k)) appears only as an image in the original document; it is expressed in terms of the quantities GPS1(q, k), GPmax(1, q) and GPmin(1, q) defined below)
where q is a summation variable; GPS1(q, k) is the similarity between the longest curve of the qth sub-region in PS1 and the longest curve in PS1(k); GPmax(1, q) and GPmin(1, q) are, respectively, the maximum and minimum of the similarities between the longest curves of the 1st to qth sub-regions in PS1 and the longest curve in PS1(k);
a sub-region in PS1 marked as a mapping domain of the anchor domain bears the same mapping-domain mark as the anchor domain; each sequence formed by sub-regions in Pic bearing the same mapping-domain mark is taken as a region mapping sequence.
Beneficial effects: because the image blocks in a region mapping sequence bear the same mapping-domain mark and represent the same kind of content across similar images, image blocks that are visually continuous can be classified accurately by the predicted reconstruction degree. Visual attention is guided to the comfort zone by image blocks with strong similarity continuity, and avoiding the visual roughness caused by non-comfort areas of the virtual scene effectively reduces post-reconstruction visual fatigue, prevents 3D dizziness, and improves 3D immersion. In addition, the improved mapping-domain mark classification raises overall processing speed, reduces stuttering, and increases the frame rate.
Further, in S500, the method of performing region indentation processing on each region mapping sequence to obtain an optimized image sequence includes the following steps:
respectively computing the structural similarity (SSIM) value between each sub-region in a region mapping sequence and its anchor domain, denoted CSSIM; taking the average of the CSSIM values as the SSIM value of the region mapping sequence, denoted RSSIM; computing the RSSIM of every region mapping sequence; and taking the average of the RSSIMs of all region mapping sequences as the indentation threshold RT;
screening all region mapping sequences with RSSIM less than RT as sequences to be optimized;
performing region indentation processing on the images in each sequence to be optimized, specifically:
screening out all sub-regions with CSSIM less than RSSIM in the sequence to be optimized as defective image sub-blocks, and performing the following operations on each defective image sub-block:
recording Pi(u) as the uth pixel in the defective image sub-block, u ∈ [1, TOT], TOT being the number of pixels in the defective image sub-block; recording all sub-regions in the sequence to be optimized whose CSSIM value is larger than that of the defective image sub-block as repair contrast regions; taking the pixels at the same coordinate position as Pi(u) in each repair contrast region as repair contrast pixels and recording the average of their pixel values as Pa; and traversing all Pi(u) and replacing the pixel value of each Pi(u) with Pa.
The above indentation processing does not account for the streamline-like retraction of line-segment positions during visual transitions. So that prominent line-segment features stand out after restoration and every image used for three-dimensional scene model reconstruction keeps continuous curve boundaries, especially for users with sensitive motion-sensing organs (such as the vestibular system of the inner ear), and so that 3D dizziness caused by dynamic blurring of curve boundaries is reduced while the user's 3D immersion experience increases, making adaptation easier, the following further scheme is added:
preferably, in S500, performing region indentation processing on the images in each sequence to be optimized further includes the following steps:
screening out all sub-regions with CSSIM less than RSSIM in the sequence to be optimized as defective image sub-blocks, and performing the following operations on each defective image sub-block: recording all sub-regions in the sequence to be optimized whose CSSIM value is larger than that of the defective image sub-block as repair contrast regions;
recording the longest curve in a repair contrast region as the repair curve and the straight line through its end points as XL; recording the longest curve in the defective image sub-block as the defect curve and the straight line through its end points as YL; projecting XL onto the defective image sub-block at the position it occupies in the repair contrast region to obtain the projection line ZL; and taking the included angle between YL and ZL as the correction angle Ale1;
computing the average of the correction angles Ale1 over the repair contrast regions and recording it as the rotation angle LGA; among the repair curves, taking the one whose similarity differs least in absolute value from that of the defect curve as the selected curve; copying the selected curve onto the defective image sub-block at the position it occupies in the repair contrast region, rotating it as a whole by the angle LGA in the direction from ZL toward YL, and deleting the defect curve;
recording Li(v) as the vth pixel on the selected curve, v ∈ [1, TOT2], TOT2 being the number of pixels on the selected curve; taking the pixels at the same coordinate position as Li(v) in each repair contrast region as curve repair pixels and recording the average of their pixel values as La; and traversing all Li(v) and replacing the pixel value of each Li(v) with La to finish the region indentation processing.
Beneficial effects: defective lines that cause visual discontinuity are repaired, so the lines in the image become more continuous without increasing computational complexity, the post-reconstruction frame rate is preserved, the user's gaze is guided toward the comfort zone by the repaired smooth lines as far as possible, flaws such as self-intersections and redundant edges in the model can be eliminated, and 3D dizziness is avoided as far as possible.
Further, in S400, the similarity calculation method is any one of an image hash algorithm and a cosine similarity algorithm.
Further, in S600, the three-dimensional reconstruction method is any one of the KinectFusion algorithm, the SfM algorithm, the R-MVSNet method, and the PointMVSNet method.
The invention also provides a virtual scene construction apparatus, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor. The processor, when executing the computer program, implements the steps of the virtual scene construction method above. The virtual scene construction apparatus may run on computing devices such as desktop computers, notebook computers, palmtop computers, and cloud data centers; the runnable system may include, but is not limited to, a processor, a memory, and a server cluster. The processor executes the computer program to run the units of the following system:
an image acquisition unit, for acquiring a plurality of images to form an image sequence;
a curve identification unit, for identifying the curves in each image of the image sequence;
an image segmentation unit, for performing edge detection on each image in the image sequence to obtain edge lines and dividing each image in the sequence into a plurality of sub-regions along the edge lines;
a mapping marking unit, for marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence;
an image optimization unit, for performing region indentation processing on each region mapping sequence to obtain an optimized image sequence;
a three-dimensional reconstruction unit, for performing three-dimensional reconstruction on each image in the image sequence to obtain a three-dimensional scene model.
The invention also provides a virtual scene construction method applied to rehabilitation training: by operating a virtual scene construction apparatus, a user observes the three-dimensional scene model obtained by the virtual scene construction steps implemented by a computer program, the virtual scene construction steps comprising:
S100, acquiring a plurality of images to form an image sequence;
S200, identifying the curves in each image of the image sequence;
S300, performing edge detection on each image in the image sequence to obtain edge lines, and dividing each image in the sequence into a plurality of sub-regions along the edge lines;
S400, marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence;
S500, performing region indentation processing on each region mapping sequence to obtain an optimized image sequence;
S600, performing three-dimensional reconstruction on each image in the image sequence to obtain a three-dimensional scene model.
Preferably, the three-dimensional scene model is used for the screen output of the display used in rehabilitation training.
The beneficial effects of the invention are as follows: the invention provides a virtual scene construction method and an exercise rehabilitation training application thereof. Image blocks that are visually continuous can be classified accurately by the predicted reconstruction degree; visual attention is guided to the comfort zone by similar image blocks with strong continuity; and avoiding the visual roughness caused by non-comfort areas of the virtual scene effectively reduces post-reconstruction visual fatigue, preventing 3D dizziness and improving 3D immersion. In addition, the improved mapping-domain mark classification raises overall processing speed, reduces stuttering, and increases the frame rate;
the streamline-like retraction of line-segment positions during visual transitions is taken into account, so that prominent line-segment features stand out after restoration and every image used for three-dimensional scene model reconstruction keeps continuous curve boundaries; in particular, for users whose motion-sensing organs (such as the vestibular system of the inner ear) are sensitive during rehabilitation training, 3D dizziness caused by dynamic blurring of curve boundaries is reduced while the 3D immersion experience during rehabilitation training improves, making adaptation easier;
defective lines causing visual discontinuity are repaired, so the lines in the image become more continuous without increasing computational complexity, the post-reconstruction frame rate is preserved, the user's gaze is guided toward the comfort zone by the repaired smooth lines as far as possible, and 3D dizziness is avoided as far as possible.
Drawings
The above and other features of the invention will become more apparent from the detailed description of the embodiments given below with reference to the accompanying drawings, in which like reference characters designate the same or similar elements. The drawings in the following description are merely examples of the invention, and other drawings may be derived from them by those skilled in the art without inventive effort. In the drawings:
FIG. 1 is a flow chart of a method for constructing a virtual scene;
fig. 2 is a system configuration diagram of a virtual scene constructing apparatus.
Detailed Description
The conception, specific structure, and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the objects, schemes, and effects of the present invention may be fully understood. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; terms such as "greater than", "less than", and "exceeding" are understood to exclude the stated number, while terms such as "above", "below", and "within" are understood to include it. Where "first" and "second" are used, they only distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
Fig. 1 is a flowchart of the virtual scene construction method according to the present invention. A virtual scene construction method and a motor rehabilitation training application thereof according to an embodiment of the present invention are described below with reference to fig. 1.
The invention provides a virtual scene construction method, which specifically comprises the following steps:
S100, acquiring a plurality of images to form an image sequence;
S200, identifying the curves in each image of the image sequence;
S300, performing edge detection on each image in the image sequence to obtain edge lines, and dividing each image in the sequence into a plurality of sub-regions along the edge lines;
S400, marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence;
S500, performing region indentation processing on each region mapping sequence to obtain an optimized image sequence;
S600, performing three-dimensional reconstruction on each image in the image sequence to obtain a three-dimensional scene model.
Further, in S100, the image sequence is a sequence of a plurality of images of the scene object acquired from different perspectives and positions.
Preferably, in S100, the method for acquiring a plurality of images of the scene object from different viewing angles and positions to form the image sequence is: calibrate a CCD camera by Zhang's calibration method, and acquire images of the scene object from different viewing angles and positions with the calibrated camera; that is, the camera rotates clockwise about the geometric center of the scene object through [0°, 360°], acquiring one image of the scene object every 1 to 15 degrees of rotation; the acquired images form the image sequence in order.
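As an illustration of this acquisition step, the following is a minimal Python sketch built on OpenCV's implementation of Zhang's calibration; the checkerboard size, camera index, 10-degree turntable step, and the interactive capture loop are assumptions made for the example, not details fixed by this disclosure.

```python
import cv2
import numpy as np

def calibrate_camera(calib_images, board_size=(9, 6)):
    """Zhang-style calibration from checkerboard photos (board size assumed)."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj_pts, img_pts = [], []
    for img in calib_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    h, w = calib_images[0].shape[:2]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, (w, h), None, None)
    return K, dist

def capture_sequence(camera_index=0, step_deg=10):
    """Grab one frame per turntable step over [0, 360) degrees; the 10-degree
    step falls in the 1-to-15-degree range given in the text."""
    cap = cv2.VideoCapture(camera_index)
    sequence = []
    for angle in range(0, 360, step_deg):
        input(f"Rotate to {angle} degrees, then press Enter")
        ok, frame = cap.read()
        if ok:
            sequence.append(frame)
    cap.release()
    return sequence
```

The intrinsic matrix K and distortion coefficients returned by calibrate_camera can then be used to undistort each captured frame before reconstruction.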
Preferably, in S100, the scene object includes a house, a tree, a person, or furniture.
Further, in S100, the image sequence includes a plurality of photographs of the same scene object.
Further, in S200, the method for detecting the curve features of each image in the image sequence is any one of the K-Segment principal curve extraction algorithm, the Douglas-Peucker algorithm, and the randomized Hough transform algorithm.
Further, in S300, the edge detection algorithm is any one of the Canny edge detection algorithm, the Sobel edge detection algorithm, the Structured Forests (structured edge detection) algorithm, and the HED (holistically-nested edge detection) algorithm.
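By way of example, the sketch below pairs the Canny detector (one of the options listed above) with connected-component labelling to cut an image into the sub-regions used later in S400; the Canny thresholds and the minimum region size are assumed values.

```python
import cv2
import numpy as np

def split_into_subregions(image, low=50, high=150, min_pixels=100):
    """Detect edge lines with Canny, then take the connected components of
    the non-edge pixels as the sub-regions Pic(i, j)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    # Edge pixels receive label 0; every region enclosed by edge lines
    # becomes its own connected component.
    num_labels, labels = cv2.connectedComponents((edges == 0).astype(np.uint8))
    subregions = [labels == lab for lab in range(1, num_labels)
                  if np.count_nonzero(labels == lab) >= min_pixels]
    return edges, subregions
```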
Further, in S400, the method of marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence, comprises the following steps:
recording the image sequence as Pic, the ith image in Pic as Pic(i), and the jth sub-region in Pic(i) as Pic(i, j), where i ∈ [1, N1], N1 being the number of images in Pic, and j ∈ [1, N2], N2 being the number of sub-regions in Pic(i);
within the value range of j, marking mapping domains for the sub-regions of each image in turn according to the curves inside Pic(i, j), as follows:
taking the sub-region Pic(i, j) as the anchor domain and its longest curve as the anchor line PTL; from each image in Pic, taking the sub-region (excluding the anchor domain itself) whose structural similarity (SSIM) with the anchor domain is largest, these sub-regions forming the first candidate sequence PS1; recording the kth sub-region in PS1 as PS1(k), where k ∈ [1, N3], N3 being the number of sub-regions in PS1; here i, j and k are serial numbers;
searching PS1 for all sub-regions meeting the anchoring mapping condition and marking them as mapping domains of the anchor domain;
the anchoring mapping conditions are as follows:
|PMean(PTL,PS1(k))-GPm(1,k-1)|≥|PiMEAN(PTL,PS1)-PMax(PTL,PS1(k))|;
where GPm(1, k-1) is the average similarity between the longest curves of the 1st to (k-1)th sub-regions in PS1 and the longest curve in PS1(k); PMean(PTL, PS1(k)) is the mean similarity between PTL and each curve in PS1(k); PiMEAN(PTL, PS1) is the mean of the similarities between PTL and the longest curve of every sub-region in PS1; PMax(PTL, PS1(k)) is the maximum similarity between PTL and the curves in PS1(k);
Still further, the anchoring mapping condition further includes:
PrePi(PTL,PS1(k))≥PMean(PTL,PS1(k))+|PiMEAN(PTL,PS1)-PMax(PTL,PS1(k))|;
where PrePi(PTL, PS1(k)) is the predicted reconstruction degree between the anchor line PTL and the longest curve in PS1(k) after three-dimensional reconstruction; PMean(PTL, PS1(k)), PiMEAN(PTL, PS1), and PMax(PTL, PS1(k)) are defined as above;
the calculation method of the prediction reconstruction degree PrePi (PTL, PS1 (k)) is:
(the formula for the predicted reconstruction degree PrePi(PTL, PS1(k)) appears only as an image in the original document; it is expressed in terms of the quantities GPS1(q, k), GPmax(1, q) and GPmin(1, q) defined below)
where q is a summation variable; GPS1(q, k) is the similarity between the longest curve of the qth sub-region in PS1 and the longest curve in PS1(k); GPmax(1, q) and GPmin(1, q) are, respectively, the maximum and minimum of the similarities between the longest curves of the 1st to qth sub-regions in PS1 and the longest curve in PS1(k);
a sub-region in PS1 marked as a mapping domain of the anchor domain bears the same mapping-domain mark as the anchor domain; each sequence formed by sub-regions in Pic bearing the same mapping-domain mark is taken as a region mapping sequence. A minimal sketch of the first anchoring mapping condition is given below.
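The first anchoring mapping condition can be checked mechanically once the curve similarities are available. The sketch below assumes all similarity values (computed with whichever measure S400 selects) are precomputed; the 0-based indexing and taking GPm(1, k-1) as 0 when PS1(k) has no predecessors are choices made for the example.

```python
import numpy as np

def mark_mapping_domains(sim_ptl_longest, sim_ptl_all, sim_pair):
    """Return the (0-based) indices k of the sub-regions PS1(k) that satisfy
    the first anchoring mapping condition.

    sim_ptl_longest[k] : similarity of PTL with the longest curve of PS1(k)
    sim_ptl_all[k]     : similarities of PTL with every curve of PS1(k)
    sim_pair[q, k]     : similarity of the longest curves of PS1(q) and
                         PS1(k), i.e. the GPS1(q, k) values
    """
    sim_pair = np.asarray(sim_pair)
    pimean = float(np.mean(sim_ptl_longest))           # PiMEAN(PTL, PS1)
    marked = []
    for k in range(len(sim_ptl_longest)):
        pmean = float(np.mean(sim_ptl_all[k]))         # PMean(PTL, PS1(k))
        pmax = float(np.max(sim_ptl_all[k]))           # PMax(PTL, PS1(k))
        # GPm(1, k-1): mean similarity of the longest curves of the first
        # k-1 sub-regions against the longest curve of PS1(k).
        gpm = float(np.mean(sim_pair[:k, k])) if k > 0 else 0.0
        if abs(pmean - gpm) >= abs(pimean - pmax):     # anchoring condition
            marked.append(k)
    return marked
```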
Beneficial effects: because the image blocks in a region mapping sequence bear the same mapping-domain mark and represent the same kind of content across similar images, image blocks that are visually continuous can be classified accurately by the predicted reconstruction degree. Visual attention is guided to the comfort zone by image blocks with strong continuity, and avoiding the visual roughness caused by non-comfort areas of the virtual scene effectively reduces post-reconstruction visual fatigue, prevents 3D dizziness, and improves 3D immersion. In addition, the improved mapping-domain mark classification raises overall processing speed, reduces stuttering, and increases the frame rate.
Further, in S500, the method of performing region indentation processing on each region mapping sequence to obtain an optimized image sequence includes the following steps:
respectively computing the structural similarity (SSIM) value between each sub-region in a region mapping sequence and its anchor domain, denoted CSSIM; taking the average of the CSSIM values as the SSIM value of the region mapping sequence, denoted RSSIM; computing the RSSIM of every region mapping sequence; and taking the average of the RSSIMs of all region mapping sequences as the indentation threshold RT;
screening all region mapping sequences with RSSIM < RT as sequences to be optimized;
performing region indentation processing on the images in each sequence to be optimized, specifically:
screening out all sub-regions with CSSIM < RSSIM in the sequence to be optimized as defective image sub-blocks, and performing the following operations on each defective image sub-block:
recording Pi(u) as the uth pixel in the defective image sub-block, u ∈ [1, TOT], TOT being the number of pixels in the defective image sub-block; recording all sub-regions in the sequence to be optimized whose CSSIM value is larger than that of the defective image sub-block as repair contrast regions; taking the pixels at the same coordinate position as Pi(u) in each repair contrast region as repair contrast pixels and recording the average of their pixel values as Pa; and traversing all Pi(u) and replacing the pixel value of each Pi(u) with Pa to finish the region indentation processing. A sketch of this computation is given below.
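As an illustration of the CSSIM/RSSIM bookkeeping and the pixel-averaging repair above, here is a minimal sketch; it assumes each region mapping sequence is a list of equal-sized grayscale uint8 patches (at least 7x7, the SSIM window default) whose first element is the anchor domain, a layout the disclosure itself does not impose.

```python
import numpy as np
from skimage.metrics import structural_similarity

def indent_region_sequences(region_sequences):
    """Region indentation: repair defective patches by averaging the
    co-located pixels of better-scoring patches in the same sequence."""
    # CSSIM of every patch against its sequence's anchor (taken as seq[0]).
    cssims = [[structural_similarity(p, seq[0]) for p in seq]
              for seq in region_sequences]
    rssims = [float(np.mean(c)) for c in cssims]
    rt = float(np.mean(rssims))                    # indentation threshold RT
    for seq, cs, rs in zip(region_sequences, cssims, rssims):
        if rs >= rt:                               # optimize only RSSIM < RT
            continue
        for i, patch in enumerate(seq):
            if cs[i] >= rs:                        # defective: CSSIM < RSSIM
                continue
            # Repair contrast regions: patches scoring above the defect.
            donors = [seq[j] for j in range(len(seq)) if cs[j] > cs[i]]
            if donors:
                mean = np.mean(np.stack(donors), axis=0)
                patch[:] = mean.astype(patch.dtype)
    return region_sequences
```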
The above indentation processing does not account for the streamline-like retraction of line-segment positions during visual transitions. So that prominent line-segment features stand out after restoration and every image used for three-dimensional scene model reconstruction keeps continuous curve boundaries, especially for users with sensitive motion-sensing organs (such as the vestibular system of the inner ear), and so that 3D dizziness caused by dynamic blurring of curve boundaries is reduced while the user's 3D immersion experience increases, making adaptation easier, the following further scheme is added:
preferably, in S500, performing region indentation processing on the images in each sequence to be optimized further includes the following steps:
screening out all sub-regions with CSSIM < RSSIM in the sequence to be optimized as defective image sub-blocks, and performing the following operations on each defective image sub-block: recording all sub-regions in the sequence to be optimized whose CSSIM value is larger than that of the defective image sub-block as repair contrast regions;
recording the longest curve in a repair contrast region as the repair curve and the straight line through its end points as XL; recording the longest curve in the defective image sub-block as the defect curve and the straight line through its end points as YL; projecting XL onto the defective image sub-block at the position it occupies in the repair contrast region to obtain the projection line ZL; and taking the included angle between YL and ZL as the correction angle Ale1;
computing the average of the correction angles Ale1 over the repair contrast regions and recording it as the rotation angle LGA; among the repair curves, taking the one whose similarity differs least in absolute value from that of the defect curve as the selected curve; copying the selected curve onto the defective image sub-block at the position it occupies in the repair contrast region, rotating it as a whole by the angle LGA in the direction from ZL toward YL, and deleting the defect curve;
recording Li(v) as the vth pixel on the selected curve, v ∈ [1, TOT2], TOT2 being the number of pixels on the selected curve; taking the pixels at the same coordinate position as Li(v) in each repair contrast region as curve repair pixels and recording the average of their pixel values as La; and traversing all Li(v) and replacing the pixel value of each Li(v) with La to finish the region indentation processing. A sketch of the curve rotation is given below.
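The whole-curve rotation above is a plain 2D rotation of the copied curve's pixel coordinates. The sketch below assumes the rotation center is the curve's first end point and that the sign of LGA already encodes the ZL-to-YL direction; neither detail is fixed by the text.

```python
import numpy as np

def rotate_selected_curve(selected_curve, lga_deg, center=None):
    """Rotate a copied curve by the angle LGA before pasting it into the
    defective sub-block. selected_curve is an (N, 2) array of (x, y)
    pixel coordinates; the rotation center defaults to its first point."""
    pts = np.asarray(selected_curve, dtype=np.float64)
    c = pts[0] if center is None else np.asarray(center, dtype=np.float64)
    theta = np.deg2rad(lga_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (pts - c) @ rot.T + c
```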
Beneficial effects: defective lines that cause visual discontinuity are repaired, so the lines in the image become more continuous without increasing computational complexity, the post-reconstruction frame rate is preserved, flaws such as self-intersections and redundant edges in the model can be eliminated, the user's gaze is guided toward the comfort zone by the repaired smooth lines as far as possible, and 3D dizziness is avoided as far as possible.
Further, in S400, the similarity calculation method is any one of an image hash algorithm and a cosine similarity algorithm (a sketch of the cosine option is given below).
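For the cosine similarity option, one workable scheme is to resample each curve to a fixed number of points by arc length and compare the flattened coordinate vectors; the resampling and mean-centering below are assumptions of this sketch, not requirements of the disclosure.

```python
import numpy as np

def cosine_curve_similarity(curve_a, curve_b, n_samples=64):
    """Cosine similarity of two polyline curves after arc-length resampling
    to n_samples points (n_samples is an assumed value)."""
    def resample(c):
        c = np.asarray(c, dtype=np.float64)
        seg = np.linalg.norm(np.diff(c, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])
        u = np.linspace(0.0, t[-1], n_samples)
        x = np.interp(u, t, c[:, 0])
        y = np.interp(u, t, c[:, 1])
        v = np.stack([x, y], axis=1).ravel()
        return v - v.mean()                  # crude translation invariance
    a, b = resample(curve_a), resample(curve_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```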
Further, in S600, the three-dimensional reconstruction method is any one of the KinectFusion algorithm, the SfM algorithm, the R-MVSNet method, and the PointMVSNet method.
Preferably, the three-dimensional scene model is used for the screen output of the display used in rehabilitation training.
As shown in fig. 2, the virtual scene construction apparatus according to the embodiment of the present invention comprises: a processor, a memory, and a computer program stored in the memory and executable on the processor. The processor, when executing the computer program, implements the steps of one of the virtual scene construction method embodiments above, and executes the computer program to run the units of the following system:
an image acquisition unit, for acquiring a plurality of images to form an image sequence;
a curve identification unit, for identifying the curves in each image of the image sequence;
an image segmentation unit, for performing edge detection on each image in the image sequence to obtain edge lines and dividing each image in the sequence into a plurality of sub-regions along the edge lines;
a mapping marking unit, for marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence;
an image optimization unit, for performing region indentation processing on each region mapping sequence to obtain an optimized image sequence;
a three-dimensional reconstruction unit, for performing three-dimensional reconstruction on each image in the image sequence to obtain a three-dimensional scene model.
The virtual scene construction apparatus may run on computing devices such as desktop computers, notebook computers, palmtop computers, and cloud data centers. It comprises, but is not limited to, a processor and a memory. Those skilled in the art will understand that the above is only an example of the virtual scene construction method and its exercise rehabilitation training application and does not limit them; the apparatus may include more or fewer components than described, combine certain components, or use different components; for example, it may further include input/output devices, network access devices, buses, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the virtual scene construction apparatus and connects the parts of the entire apparatus through various interfaces and lines.
The memory may be used to store the computer program and/or modules; the processor implements the various functions of the virtual scene construction method and its exercise rehabilitation training application by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to use (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The invention also provides a virtual scene construction method applied to rehabilitation training: by operating the virtual scene construction apparatus, a user observes the three-dimensional scene model obtained by three-dimensional reconstruction according to the steps of the virtual scene construction method implemented by a computer program.
Although the present invention has been described in considerable detail with reference to certain illustrated embodiments, it is not intended to be limited to any such details or embodiments or to any particular embodiment, and it is to be construed as effectively covering the intended scope of the invention. Furthermore, the invention has been described in terms of embodiments foreseen by the inventor for which an enabling description was available, although insubstantial modifications of the invention not presently foreseen may nonetheless represent equivalents thereto.

Claims (7)

1. A virtual scene construction method is characterized by comprising the following steps:
S100, acquiring a plurality of images to form an image sequence;
S200, identifying the curves in each image of the image sequence;
S300, performing edge detection on each image in the image sequence to obtain edge lines, and dividing each image in the sequence into a plurality of sub-regions along the edge lines;
S400, marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence;
S500, performing region indentation processing on each region mapping sequence to obtain an optimized image sequence;
S600, performing three-dimensional reconstruction on each image in the image sequence to obtain a three-dimensional scene model;
wherein, in S400, the method of marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence, comprises the following steps:
taking the ith image in the image sequence Pic as Pic(i), and the jth sub-region in Pic(i) as Pic(i, j);
within the value range of j, marking mapping domains for the sub-regions of each image in turn according to the curves inside Pic(i, j), specifically: taking the sub-region Pic(i, j) as the anchor domain and its longest curve as the anchor line PTL; from each image in Pic, taking the sub-region (excluding the anchor domain itself) whose structural similarity (SSIM) with the anchor domain is largest, these sub-regions forming the first candidate sequence PS1; letting the kth sub-region in PS1 be PS1(k); wherein i, j and k are serial numbers;
searching PS1 for all sub-regions meeting the anchoring mapping condition and marking them as mapping domains of the anchor domain, the anchoring mapping condition being:
|PMean(PTL,PS1(k))-GPm(1,k-1)|≥|PiMEAN(PTL,PS1)-PMax(PTL,PS1(k))|;
where GPm(1, k-1) is the average similarity between the longest curves of the 1st to (k-1)th sub-regions in PS1 and the longest curve in PS1(k); PMean(PTL, PS1(k)) is the mean similarity between PTL and each curve in PS1(k); PiMEAN(PTL, PS1) is the mean of the similarities between PTL and the longest curve of every sub-region in PS1; PMax(PTL, PS1(k)) is the maximum similarity between PTL and the curves in PS1(k);
a sub-region in PS1 marked as a mapping domain of the anchor domain bears the same mapping-domain mark as the anchor domain; each sequence formed by sub-regions in Pic bearing the same mapping-domain mark is taken as a region mapping sequence;
in S500, the method for performing region indentation processing on each region mapping sequence to obtain an optimized image sequence includes the following steps:
respectively computing the structural similarity (SSIM) value between each sub-region in a region mapping sequence and its anchor domain, denoted CSSIM; taking the average of the CSSIM values as the SSIM value of the region mapping sequence, denoted RSSIM; computing the RSSIM of every region mapping sequence; and taking the average of the RSSIMs of all region mapping sequences as the indentation threshold RT;
screening all region mapping sequences with RSSIM < RT as sequences to be optimized;
performing region indentation processing on the images in each sequence to be optimized, specifically:
screening out all sub-regions with CSSIM < RSSIM in the sequence to be optimized as defective image sub-blocks, and performing the following operations on each defective image sub-block:
recording Pi(u) as the uth pixel in the defective image sub-block, u ∈ [1, TOT], TOT being the number of pixels in the defective image sub-block; recording all sub-regions in the sequence to be optimized whose CSSIM value is larger than that of the defective image sub-block as repair contrast regions; taking the pixels at the same coordinate position as Pi(u) in each repair contrast region as repair contrast pixels and recording the average of their pixel values as Pa; and traversing all Pi(u) and replacing the pixel value of each Pi(u) with Pa.
2. The virtual scene construction method according to claim 1, wherein, in S100, the method for acquiring a plurality of images of the scene object from different viewing angles and positions to form the image sequence is: calibrating a CCD camera by Zhang's calibration method, and acquiring images of the scene object from different viewing angles and positions with the calibrated camera; that is, rotating clockwise about the geometric center of the scene object through [0°, 360°] and acquiring one image of the scene object every 1 to 15 degrees of rotation; the acquired images forming the image sequence in order.
3. The method of claim 1, wherein the region indentation processing of the images in each sequence to be optimized further comprises the following steps:
screening out all sub-regions with CSSIM < RSSIM in the sequence to be optimized as defective image sub-blocks, and performing the following operations on each defective image sub-block: recording all sub-regions in the sequence to be optimized whose CSSIM value is larger than that of the defective image sub-block as repair contrast regions;
recording the longest curve in a repair contrast region as the repair curve and the straight line through its end points as XL; recording the longest curve in the defective image sub-block as the defect curve and the straight line through its end points as YL; projecting XL onto the defective image sub-block at the position it occupies in the repair contrast region to obtain the projection line ZL; and taking the included angle between YL and ZL as the correction angle Ale1;
computing the average of the correction angles Ale1 over the repair contrast regions and recording it as the rotation angle LGA; among the repair curves, taking the one whose similarity differs least in absolute value from that of the defect curve as the selected curve; copying the selected curve onto the defective image sub-block at the position it occupies in the repair contrast region, rotating it as a whole by the angle LGA in the direction from ZL toward YL, and deleting the defect curve;
recording Li(v) as the vth pixel on the selected curve, v ∈ [1, TOT2], TOT2 being the number of pixels on the selected curve; taking the pixels at the same coordinate position as Li(v) in each repair contrast region as curve repair pixels and recording the average of their pixel values as La; and traversing all Li(v) and replacing the pixel value of each Li(v) with La to finish the region indentation processing.
4. The virtual scene construction method according to claim 1, wherein, in S600, the three-dimensional reconstruction method is any one of the KinectFusion algorithm, the SfM algorithm, the R-MVSNet method, and the PointMVSNet method.
5. The virtual scene construction method according to claim 1, wherein the three-dimensional scene model is used for the screen output of a display used in rehabilitation training.
6. A virtual scene construction apparatus, comprising: a processor, a memory, and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of the virtual scene construction method according to claim 1, and the virtual scene construction apparatus runs on a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud data center.
7. A rehabilitation training application, wherein a user operates the virtual scene construction apparatus according to claim 6 to observe a three-dimensional scene model obtained by virtual scene construction steps implemented by a computer program, the virtual scene construction steps comprising:
S100, acquiring a plurality of images to form an image sequence;
S200, identifying the curves in each image of the image sequence;
S300, performing edge detection on each image in the image sequence to obtain edge lines, and dividing each image in the sequence into a plurality of sub-regions along the edge lines;
S400, marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence;
S500, performing region indentation processing on each region mapping sequence to obtain an optimized image sequence;
S600, performing three-dimensional reconstruction on each image in the image sequence to obtain a three-dimensional scene model;
wherein, in S400, the method of marking mapping domains for the sub-regions of each image in turn according to the curves inside all sub-regions of each image in the image sequence, and taking each sequence formed by sub-regions bearing the same mapping-domain mark as a region mapping sequence, comprises the following steps:
taking the ith image in the image sequence Pic as Pic(i), and the jth sub-region in Pic(i) as Pic(i, j);
within the value range of j, marking mapping domains for the sub-regions of each image in turn according to the curves inside Pic(i, j), specifically: taking the sub-region Pic(i, j) as the anchor domain and its longest curve as the anchor line PTL; from each image in Pic, taking the sub-region (excluding the anchor domain itself) whose structural similarity (SSIM) with the anchor domain is largest, these sub-regions forming the first candidate sequence PS1; letting the kth sub-region in PS1 be PS1(k); wherein i, j and k are serial numbers;
searching PS1 for all sub-regions meeting the anchoring mapping condition and marking them as mapping domains of the anchor domain, the anchoring mapping condition being:
|PMean(PTL,PS1(k))-GPm(1,k-1)|≥|PiMEAN(PTL,PS1)-PMax(PTL,PS1(k))|;
where GPm(1, k-1) is the average similarity between the longest curves of the 1st to (k-1)th sub-regions in PS1 and the longest curve in PS1(k); PMean(PTL, PS1(k)) is the mean similarity between PTL and each curve in PS1(k); PiMEAN(PTL, PS1) is the mean of the similarities between PTL and the longest curve of every sub-region in PS1; PMax(PTL, PS1(k)) is the maximum similarity between PTL and the curves in PS1(k);
a sub-region in PS1 marked as a mapping domain of the anchor domain bears the same mapping-domain mark as the anchor domain; each sequence formed by sub-regions in Pic bearing the same mapping-domain mark is taken as a region mapping sequence;
in S500, the method for performing region indentation processing on each region mapping sequence to obtain an optimized image sequence includes the following steps:
respectively computing the structural similarity (SSIM) value between each sub-region in a region mapping sequence and its anchor domain, denoted CSSIM; taking the average of the CSSIM values as the SSIM value of the region mapping sequence, denoted RSSIM; computing the RSSIM of every region mapping sequence; and taking the average of the RSSIMs of all region mapping sequences as the indentation threshold RT;
screening all region mapping sequences with RSSIM < RT as sequences to be optimized;
performing region indentation processing on the images in each sequence to be optimized, specifically:
screening out all sub-regions with CSSIM < RSSIM in the sequence to be optimized as defective image sub-blocks, and performing the following operations on each defective image sub-block:
recording Pi(u) as the uth pixel in the defective image sub-block, u ∈ [1, TOT], TOT being the number of pixels in the defective image sub-block; recording all sub-regions in the sequence to be optimized whose CSSIM value is larger than that of the defective image sub-block as repair contrast regions; taking the pixels at the same coordinate position as Pi(u) in each repair contrast region as repair contrast pixels and recording the average of their pixel values as Pa; and traversing all Pi(u) and replacing the pixel value of each Pi(u) with Pa.
CN202211256882.9A 2022-10-14 2022-10-14 Virtual scene construction method and rehabilitation training application thereof Active CN115346002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211256882.9A CN115346002B (en) 2022-10-14 2022-10-14 Virtual scene construction method and rehabilitation training application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211256882.9A CN115346002B (en) 2022-10-14 2022-10-14 Virtual scene construction method and rehabilitation training application thereof

Publications (2)

Publication Number Publication Date
CN115346002A CN115346002A (en) 2022-11-15
CN115346002B true CN115346002B (en) 2023-01-17

Family

ID=83957741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211256882.9A Active CN115346002B (en) 2022-10-14 2022-10-14 Virtual scene construction method and rehabilitation training application thereof

Country Status (1)

Country Link
CN (1) CN115346002B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509104A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Confidence map-based method for distinguishing and detecting virtual object of augmented reality scene
WO2018133119A1 (en) * 2017-01-23 2018-07-26 中国科学院自动化研究所 Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN114004939A (en) * 2021-12-31 2022-02-01 深圳奥雅设计股份有限公司 Three-dimensional model optimization method and system based on modeling software script

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8514225B2 (en) * 2011-01-07 2013-08-20 Sony Computer Entertainment America Llc Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US8970740B2 (en) * 2012-12-14 2015-03-03 In View Technology Corporation Overlap patterns and image stitching for multiple-detector compressive-sensing camera
CN111340686B (en) * 2020-02-19 2023-05-23 华南理工大学 Virtual reality scene assessment method, system and medium with crowd bias
CN114820935A (en) * 2022-04-19 2022-07-29 北京达佳互联信息技术有限公司 Three-dimensional reconstruction method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509104A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Confidence map-based method for distinguishing and detecting virtual object of augmented reality scene
WO2018133119A1 (en) * 2017-01-23 2018-07-26 中国科学院自动化研究所 Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN114004939A (en) * 2021-12-31 2022-02-01 深圳奥雅设计股份有限公司 Three-dimensional model optimization method and system based on modeling software script

Also Published As

Publication number Publication date
CN115346002A (en) 2022-11-15

Similar Documents

Publication Publication Date Title
US10540576B1 (en) Panoramic camera systems
US10872420B2 (en) Electronic device and method for automatic human segmentation in image
Tursun et al. The state of the art in HDR deghosting: A survey and evaluation
CN110599421B (en) Model training method, video fuzzy frame conversion method, device and storage medium
CN108596923B (en) Three-dimensional data acquisition method and device and electronic equipment
WO2021027543A1 (en) Monocular image-based model training method and apparatus, and data processing device
US11812154B2 (en) Method, apparatus and system for video processing
Xiang et al. No-reference depth assessment based on edge misalignment errors for T+ D images
Wang et al. Stereoscopic image retargeting based on 3D saliency detection
Abd Manap et al. Disparity refinement based on depth image layers separation for stereo matching algorithms
CN116452618A (en) Three-input spine CT image segmentation method
CN115346002B (en) Virtual scene construction method and rehabilitation training application thereof
JP7312026B2 (en) Image processing device, image processing method and program
Liu et al. Using web photos for measuring video frame interestingness
WO2022188102A1 (en) Depth image inpainting method and apparatus, camera assembly, and electronic device
CN111080543B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN114004839A (en) Image segmentation method and device of panoramic image, computer equipment and storage medium
Liang et al. Video2Cartoon: A system for converting broadcast soccer video into 3D cartoon animation
Croci et al. Sharpness mismatch detection in stereoscopic content with 360-degree capability
CN111985535A (en) Method and device for optimizing human body depth map through neural network
WO2020212794A1 (en) Augmented reality implementation method
CN110415239B (en) Image processing method, image processing apparatus, medical electronic device, and medium
CN111344740A (en) Camera image processing method based on marker and augmented reality equipment
CN115953813B (en) Expression driving method, device, equipment and storage medium
CN115909446B (en) Binocular face living body discriminating method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant