CN108234986A - 3D image management method, management system and device for treating myopia or amblyopia - Google Patents
3D image management method, management system and device for treating myopia or amblyopia
- Publication number
- CN108234986A (application CN201810054258.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- rendering
- parameter
- amblyopia
- eyes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a 3D image management method for treating myopia or amblyopia. A 2D image is first converted into a 3D image that can change dynamically. The 3D image comprises a left parallax image and a right parallax image, corresponding respectively to the left and right eyes of the viewer, where the left eye can see only the left parallax image and the right eye can see only the right parallax image. When the 3D image is played on a terminal, the parameters of the left and right parallax images can be adjusted independently according to the user's needs, and adjusting the parameters of one image does not affect the parameters of the other. With this terminal processing method, a system for treating myopia or amblyopia does not need specially remade videos; the parameters of all or part of the picture on the left or right screen can be adjusted independently without affecting the other picture, giving the doctor a wider choice of treatment options to treat the amblyopic eye more effectively while ensuring that the healthy eye is fully exercised and protected.
Description
Technical field
The invention belongs to the fields of digital image processing and medicine, and relates to a 3D image management method, management system and device for treating myopia or amblyopia.
Background technology
VR (Virtual Reality) technology presents virtual scenes to the user through a device, covering vision, hearing, interaction and more. Vision is currently the most important factor in creating the virtual experience: existing VR devices exploit human binocular stereo vision, showing a different image to each eye so that parallax produces a stereoscopic 3D scene. This characteristic can be used in many fields, including medical applications.
At present, however, the content-generation mechanism for VR devices is complex, so the content a VR device can display is insufficient in quantity and cannot be customized.
Existing methods of producing visual content for VR devices are as follows:
Specific virtual-reality content is provided for VR devices by manually modelling or editing 3D scenes and then recording 3D images. The scene is first modelled manually in 3D; computer-graphics calculations then produce dual-screen content with fixed parallax, which is supplied directly to the VR device. With this method the labour cost of building scenes is very high, so the volume of content cannot be scaled up effectively. Moreover, because the virtual scene is recorded in advance, it cannot be customized or updated.
Alternatively, 2D multimedia resources (images and video) are combined with manually built 3D scenes, and computation is used so that the 2D media can be displayed inside a fixed 3D virtual scene. The 2D resource is fixed at a specific position in the 3D virtual scene, giving a pseudo-3D effect in which the 2D resource appears to exist inside the scene; disparity computation produces the dual-screen content, and the user can view the 2D resource from different angles. This method can effectively turn the large existing body of 2D media into virtual-scene content, but the position of the 2D resource in the 3D scene is fixed and cannot be adjusted dynamically, so the 2D content cannot change dynamically within the scene. The displayed content therefore still cannot be customized, and the configurable, interactive character of VR content is largely lost.
Although techniques now exist in the prior art for converting 2D resources into 3D resources, they lack the mapping functions required for treatment, such as translation, flipping and scaling of the image, so the existing 2D resources from other fields cannot be used well in the treatment of myopia or amblyopia.
Amblyopia is a common eye disease of children in clinical ophthalmology. It is a state of reduced visual function arising in infancy when, for various sensory, motor, conduction or visual-centre reasons, the eye fails to receive adequate visual stimulation and visual development is impaired; it mainly manifests as low visual acuity and impaired binocular vision.
The common treatment for amblyopia is occlusion therapy (patching), divided into full and partial occlusion. The aim is to cover the stronger eye, forcing the amblyopic eye to fixate and perform fine visual work. However, long-term patching often causes the healthy eye on the side opposite the amblyopic eye to itself develop amblyopia, impairing stereopsis and binocular fusion.
Conventional image processing operates on a single image; it has no function for presenting the same image differently to the two eyes, and therefore cannot be applied in VR scenarios where the physiological parameters of the two eyes differ. Moreover, traditional image processing lacks parametrized, continuously adjustable processing effects, so it cannot adapt to the fine-tuning required by differing eye conditions and cannot be applied effectively in ophthalmic medicine.
Invention content
The object of the present invention is to convert a 2D resource into a 3D entity in a 3D scene through computer-graphics 3D transformation, so that its position in the scene can change dynamically (including translation, rotation and scaling); to use a dichoptic (separate-eye) stereoscopic imaging mode in which the image parameters seen by each eye are adjusted independently; and thereby to treat amblyopia effectively for the individual circumstances of the patient while protecting the eyes.
To achieve the above object, the present invention adopts the following technical solution:
A management method for 3D images for treating myopia or amblyopia, comprising: first converting a 2D image into a 3D image that can change dynamically; the 3D image comprises a left parallax image and a right parallax image, corresponding respectively to the left and right eyes, where the left eye can see only the left parallax image and the right eye can see only the right parallax image.
When the 3D image is played on a terminal, the parameters of the left and right parallax images can be adjusted independently according to the user's needs; adjusting the parameters of one image does not affect the parameters of the other.
The method of converting the 2D image into a dynamically changeable 3D image is: extract the image from the 2D resource; place the extracted result into a 3D coordinate system through a texture-mapping operation; then carry out motion and rendering in the 3D coordinate system to realize dynamic change, first performing the 3D coordinate-transformation calculation and then the projection calculation that forms the 2D image projection. By computing parallax in real time, the 2D multimedia content is incorporated into the 3D scene as a movable plane, so that the 3D image can change dynamically in the 3D virtual scene in a predetermined manner.
Further, the dynamic change of the 3D image is one or more of translation, rotation, flipping and scaling of the image.
Further, the method of converting the 2D image into a dynamically changeable 3D image comprises the following steps:
Step a: extract the image from the 2D resource;
Step b: choose the required 3D virtual scene;
Step c: build a 2D resource plane in the 3D virtual scene, and set the initial position, angle and size of the plane in the scene's coordinate system;
Step d: build the intrinsic coordinate system of the 2D resource plane; by parsing the features of the 2D resource image, perform the coordinate-mapping calculation and convert the 2D resource image into the texture of the 2D resource plane;
Step e: by computing parallax in real time, incorporate the 2D multimedia content into the 3D scene as a movable plane, so that configurable 3D graphics transformations can be applied to it.
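Steps c and d above can be sketched numerically. The snippet below is a minimal, self-contained illustration (not the patent's actual implementation) of building a resource plane in the scene's coordinate system with homogeneous coordinates and moving it with a 4x4 transform; the plane size and translation are made-up values.

```python
import numpy as np

def make_plane(width, height, position):
    """Four corner vertices of a 2D resource plane, centred at `position`
    in the 3D scene's coordinate system (homogeneous coordinates)."""
    w, h = width / 2.0, height / 2.0
    corners = np.array([
        [-w, -h, 0.0, 1.0],
        [ w, -h, 0.0, 1.0],
        [ w,  h, 0.0, 1.0],
        [-w,  h, 0.0, 1.0],
    ])
    corners[:, :3] += position
    return corners

def translation(dx, dy, dz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

# Place a 2-by-1 plane at the scene origin, then move it 5 units along +z.
plane = make_plane(2.0, 1.0, np.array([0.0, 0.0, 0.0]))
moved = (translation(0.0, 0.0, 5.0) @ plane.T).T
print(moved[0])  # first corner, now at z = 5
```

Rotation and scaling work the same way: each is another 4x4 matrix multiplied onto the corner vertices.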
Further, in step e, the 2D resource entity is transformed in the 3D scene according to a configurable transformation mode. The calculation process is as follows:
obtain the plane's initial coordinate matrix and the current coordinate-transformation matrix;
obtain the configured plane motion track, calculate the transition matrix for the current stage, and smooth it;
calculate the plane's coordinates in the 3D scene to obtain its coordinates in the next frame, and recalculate the light reflection;
calculate the current position and viewing angle of the eyes in the virtual scene from the motion-sensor data and the configuration data of the VR device;
from the plane's position in the next frame and the virtual positions of the eyes, calculate the projection relations of all objects in the 3D scene, including the 2D content, in the next frame, forming a dual-screen output with parallax.
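The last two steps — eye positions plus a per-eye projection producing parallax — can be illustrated with a toy pinhole projection. This is only a sketch under simplifying assumptions: a single scene point instead of a whole plane, both eyes looking straight down +z, and an assumed interpupillary distance of 64 mm.

```python
import numpy as np

def project(point, eye, screen_z=1.0):
    """Perspective-project a 3D point onto a screen plane at distance
    `screen_z` in front of `eye` (simple pinhole model)."""
    d = point - eye
    t = screen_z / d[2]
    return (eye + t * d)[:2]

ipd = 0.064  # assumed interpupillary distance, metres
left_eye  = np.array([-ipd / 2, 0.0, 0.0])
right_eye = np.array([ ipd / 2, 0.0, 0.0])

p = np.array([0.0, 0.0, 2.0])          # a point on the resource plane
left_px  = project(p, left_eye)
right_px = project(p, right_eye)
disparity = left_px[0] - right_px[0]    # horizontal parallax between the screens
print(disparity)
```

Projecting every vertex for each eye in this way yields the two screens; the nearer the point, the larger the disparity magnitude.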
Further, the parameters of the left and right parallax images are one or more of the size, brightness, grey scale, saturation and contrast of all or part of the picture of each image.
Further, the parameters of the remainder of the picture stay fixed; adjustment changes only the parameters of the designated part.
Further, the method for independent adjustment parameter is carries out configurable content shelter according to right and left eyes parameter setting
Reason, includes the following steps:
The screen of VR equipment eyes is obtained, and is translated into image
Content is subjected to left and right split screen, and carry out configurable content masking according to right and left eyes parameter and handle.
For right and left eyes parameter configuration, color, light and shade, the independent process of contrast can be carried out to right and left eyes image.
For right and left eyes parameter configuration, the independent process of image size can be carried out to right and left eyes image.
Export right and left eyes image.
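The split-screen step above can be sketched as a simple array operation, assuming the VR frame arrives as an H×W×3 array with the two eyes side by side (an assumption for illustration, not the patent's stated layout):

```python
import numpy as np

def split_screen(frame):
    """Split a full VR frame (H x W x 3) into left- and right-eye halves."""
    h, w, _ = frame.shape
    return frame[:, : w // 2].copy(), frame[:, w // 2 :].copy()

# Tiny synthetic frame: left half value 10, right half value 20.
frame = np.zeros((4, 8, 3), dtype=np.uint8)
frame[:, :4] = 10
frame[:, 4:] = 20
left, right = split_screen(frame)
print(left.shape, right.shape)  # (4, 4, 3) (4, 4, 3)
```

Each half can then be masked and processed with its own eye's parameters before being written back.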
Further, the steps of configurable content masking according to the left- and right-eye parameters are:
Step 1: copy the picture content into two copies, one for each eye;
Step 2: map the left- and right-eye parameters to masking percentages P1 and P2 for the two copies, where the total masked content satisfies P1+P2 < 100%;
Step 3: obtain the total number of picture pixels X, and calculate the unit size of the mosaic from the configuration, with an m*n pixel square as one basic unit;
Step 4: randomly select Y basic units as mosaic squares, such that the total mosaic pixel count satisfies Y*m*n = X*(P1+P2);
Step 5: randomly assign the chosen masking squares to the left- and right-eye images in the ratio P1:P2.
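A minimal sketch of steps 2-5, counting mosaic units only (unit placement and the actual pixel masking are omitted); the image size and P1/P2 values are made-up for illustration:

```python
import random

def allocate_mosaic(total_pixels, p1, p2, m, n, seed=0):
    """Choose Y mosaic units so the masked pixel count Y*m*n ~= X*(P1+P2),
    then split the units between left and right eye in the ratio P1:P2."""
    assert p1 + p2 < 1.0
    unit = m * n                                  # pixels per basic unit
    y = round(total_pixels * (p1 + p2) / unit)    # number of mosaic units Y
    units = list(range(y))
    random.Random(seed).shuffle(units)            # random assignment
    k = round(y * p1 / (p1 + p2))                 # units masked in the left eye
    return units[:k], units[k:]

left_units, right_units = allocate_mosaic(total_pixels=640 * 480,
                                          p1=0.3, p2=0.1, m=8, n=8)
print(len(left_units), len(right_units))
```

Here X = 640*480 and P1+P2 = 40%, so about 1920 units of 8x8 pixels are masked, split 3:1 between the eyes.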
The present invention also provides a 3D image management system for treating myopia or amblyopia, comprising an image extraction module, a computing module, a 3D processing module, a parameter adjustment module and a VR device. The image extraction module extracts the image from the 2D resource; the computing module first performs the 3D coordinate-transformation calculation and computes parallax in real time; the 3D processing module incorporates the 2D multimedia content into the 3D scene as a movable plane, so that the 3D image can change dynamically in the VR device's 3D virtual scene in a predetermined manner; and the parameter adjustment module realizes independent adjustment of the parameters of the left and right parallax images by performing configurable content masking according to the left- and right-eye parameter settings.
The present invention also provides a device for treating amblyopia. The device comprises wearable equipment in which the above 3D image management system for treating myopia or amblyopia is installed.
Compared with the prior art, the present invention has the following advantages:
The invention incorporates 2D multimedia content into 3D scenes as movable planes, enabling rapid production of VR content.
By computing parallax in real time, the 2D multimedia content incorporated into the 3D scene can undergo configurable 3D graphics transformations; that is, the multimedia content can be translated, rotated, flipped and scaled in the virtual scene in a predetermined manner.
By first performing the 3D coordinate transformation and then the 2D projection calculation, the dynamic lighting of the virtual scene can be computed (given sufficient device computing power), making the movement of the 2D multimedia content in the virtual scene appear more realistic.
The parameter adjustment method of the present invention allows the size, brightness, grey scale, colour saturation, contrast and other parameters of all or part of the picture on the left or right screen to be adjusted independently, according to the characteristics of the individual amblyopia patient, without affecting the parameters of the other picture. This gives the doctor a wider choice of treatment options to treat the amblyopic eye more effectively for the individual patient, while ensuring that the healthy eye is fully exercised and protected.
Specifically, the invention can process content independently for each eye, combining multiple weakening modes to form a customizable medical procedure. All weakening processes can be adjusted continuously and linearly through the eye parameters, and can be fine-tuned as the treatment progresses.
By viewing the 3D images of the present invention, children with amblyopia can exercise related visual skills such as fixation, localization, recognition, following and searching. With the support of motion-sensing interaction technology, training teaches the child coordinated examination operations (hand, eye and brain coordination), builds visual skills, establishes visual impressions, forms visual memory, and stimulates the visual elements of the cerebral cortex and their processing, thereby improving the child's visual processing ability.
Specific embodiment
The present invention will now be described in detail with reference to specific embodiments.
In one embodiment of the invention, an ordinary 2D resource is converted into a 3D resource as follows:
the VR software system extracts the image from the 2D resource (in particular, video);
the VR software system imports and chooses a 3D virtual scene; a 2D resource plane of suitable size is built in the 3D virtual scene, and its initial position, angle and size in the scene's coordinate system are set;
the intrinsic coordinate system of the 2D resource plane is built; by parsing the features of the 2D resource image, the coordinate-mapping calculation is performed and the 2D resource image is converted into the texture of the 2D resource plane.
The 2D resource entity can then be transformed in the 3D scene according to a configurable transformation mode. The calculation process is as follows:
obtain the plane's initial coordinate matrix and the current transformation matrix;
the plane's initial coordinates are its coordinates at start time 0;
calculate the transition matrix for each stage;
a transition matrix is the accumulation of the displacement, rotation and other operations from time 0 (the defined start time) to the current time: initial coordinates * transition matrix = current coordinates.
In one embodiment of the invention, if a video has 100 frames and each frame performs one position update, there are 99 position updates and therefore 99 transition matrices.
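The per-frame update can be sketched by composing one transition matrix per frame, as in the 100-frame example above (the per-frame step is a made-up value):

```python
import numpy as np

def translation(dx, dy, dz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

# 100 frames -> 99 per-frame transitions.  With column vectors the
# coordinates accumulate as: current = transition_99 @ ... @ transition_1 @ initial.
initial = np.array([0.0, 0.0, 0.0, 1.0])
step = translation(0.1, -0.05, 0.0)   # a small move right and down each frame

current = initial
for _ in range(99):
    current = step @ current
print(current[:2])   # ~[9.9, -4.95] after 99 updates
```

Rotations and scalings slot into the same product; smoothing the track amounts to choosing these 99 matrices so consecutive steps differ only slightly.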
Obtain the configured plane motion track, calculate the transition matrix for the current stage, and smooth it.
For example, "move at uniform speed from upper left to lower right" becomes 99 transition matrices, each representing a small movement to the right and downward.
Obtaining the configured plane motion track means reading the software configuration: motion effects such as smooth left-to-right movement, random up-and-down jitter, or acceleration while rotating can be predefined. The motion track here refers to the effect the user wants to achieve.
Calculate the plane's coordinates in the 3D scene to obtain its coordinates in the next frame (involving translation, rotation and scaling), and recalculate the light reflection.
This step computes "current coordinates" * "transition matrix of the next stage", giving the coordinates the plane should have in the next frame; lighting is then recalculated from the new coordinates, handling the reflection effects under point or parallel light sources according to the relative position and angle of the plane and the light source.
Calculate the current position and viewing angle of the eyes in the virtual scene from the motion-sensor data of the VR device and the configuration data of the particular device (interpupillary distance, etc.).
The motion data from the VR device's sensors mainly represent the user's head rotation and elevation angles. Taking the user's viewing angle as input and accounting for interpupillary distance, the coordinates and directions of the left and right eyes in the 3D scene are calculated, and the final images are formed by projecting the 3D scene for each eye.
From the plane's position in the next frame and the virtual positions of the eyes, the projection relations of all objects in the 3D scene, including the 2D content, are calculated for the next frame, forming a dual-screen output with parallax.
The present invention specifically provides a device for treating amblyopia. The device comprises wearable equipment; the 3D image system in the equipment comprises a display screen, convex lenses, and the left and right parallax images output on the display screen. The left and right parallax images each occupy half of the display and are distributed with axial symmetry; their parameters are adjusted independently according to the user's needs.
The head-mounted equipment is worn on the user's head, with a lens or lens group at the position corresponding to each eye.
The display may be an ordinary display mounted at the corresponding position of the head-mounted equipment for playing 3D images; it may be an ordinary smartphone screen, or a retinal-projection display.
Taking a common smartphone as an example, a receiving chamber is provided at the corresponding position of the head-mounted equipment; the phone is inserted so that its display is aligned with the lens or lens-group position. Pre-made 3D images can be stored on the phone and played on demand.
The left and right parallax images are produced according to the principle of human binocular parallax: they are two images with small differences, distributed with axial symmetry, each eye seeing one of them; the brain then fuses them into a single image with stereoscopic effect. By splitting the same picture into left and right parallax images, distant and near targets can both be simulated, so that eyes surrounded by a near environment get coordinated exercise in looking both far and near.
In the present invention the 3D image is bisected at the middle of the picture into the left and right parallax images. Taking a smartphone as an example, the screen is placed in landscape orientation and bisected into a left screen and a right screen, which play the left and right parallax images respectively.
The left and right eyepieces are convex lenses, their centres corresponding to the left and right screens respectively.
The related software parameters of the left and right screens can be adjusted through various control means such as the touch screen, buttons or knobs. The software obtains information through the sensors supported by the system and adjusts the size, brightness, grey scale, colour saturation, contrast and so on of all or part of the picture of each screen accordingly; the adjusted images on the two screens are then used by the 3D image system for treatment training of the amblyopia patient.
The parameters of the left and right parallax images of the present invention can be adjusted independently. In one embodiment of the invention, the 3D image is played through software on a terminal device such as a phone; the client can freely select and adjust the parameters of the left and right parallax images separately, or connect a remote control, for example via Bluetooth, to adjust the parameters.
For example, when a parameter of the left parallax image is adjusted from its minimum to its maximum, the corresponding parameter of the right parallax image can remain unchanged at its original value. Or, when the area parameter of the left parallax image is scaled down from 100% of the original picture to a minimum of 70%, the area parameter of the right parallax image keeps the original size of the image unchanged.
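The independence of the two parameter sets can be sketched as plain state. The `EyeParams`/`DisplayParams` containers below are hypothetical names for illustration, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class EyeParams:
    size: float = 1.0        # fraction of the original picture size
    brightness: float = 1.0  # 1.0 = original brightness
    contrast: float = 1.0    # 1.0 = original contrast

@dataclass
class DisplayParams:
    left: EyeParams = field(default_factory=EyeParams)
    right: EyeParams = field(default_factory=EyeParams)

params = DisplayParams()
params.left.size = 0.70   # scale the left parallax image down to 70%
print(params.left.size, params.right.size)  # 0.7 1.0 — right eye untouched
```

Because each eye holds its own parameter record, writing one record cannot affect the other, which is exactly the independence property described above.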
In one embodiment of the invention, the processing method of the invention performs configurable content masking according to the left- and right-eye parameter settings, comprising the following steps:
copy the picture content into two copies, one for each eye;
map the left- and right-eye parameters to masking percentages P1 and P2 for the two copies (total masked content P1+P2 < 100%);
obtain the total number of picture pixels X, and calculate the unit size of the mosaic from the configuration (an m*n pixel square is one basic unit);
randomly select Y basic units as mosaic squares, such that the total mosaic pixel count satisfies Y*m*n = X*(P1+P2);
randomly assign the chosen masking squares to the left- and right-eye images in the ratio P1:P2;
for the content of the squares, the weakening treatments described in step 3 of the embodiments below can be applied.
Specifically, the left- and right-eye images can be processed independently according to the left- and right-eye parameter configuration. The adjustment steps for the various image parameters can be carried out freely in any order; the adjustments are independent of one another and impose no ordering constraints. Several specific embodiments of the invention illustrate below how different parameters can be adjusted.
Embodiment 1: independent adjustment of colour
The steps are as follows:
1. Obtain the eye parameters.
2. Obtain the numerical value of each pixel of the image (there are many existing representations of pixel colour; the calculations below are illustrated with RGB).
3. Calculate the numerical value of each pixel, specifically as follows:
According to the left- and right-eye parameter configuration, adjust the colour intensity of the image so that the colour intensity received by the two eyes differs. The detailed process is a pixel-level calculation on the image, so that the image shows different colour effects according to different parameters. The method is:
The input parameter is the information-strength percentage P (range [0,1]) for each of the two eyes.
It is characterized in that when P is 1 the image is presented in its original colour; when P is 0 the image is rendered as a grey-level image; and when P takes intermediate values the image effect varies linearly.
The method performs colour weakening by a pixel-level mapping. It is described here using the RGB colour-space model of the image (other models such as HSV express it differently, but the mathematics is essentially similar): the RGB value of each pixel is obtained and replaced by the corresponding mapped value. There are many pixel calculations for greying an image that satisfy the above characteristics; a typical mapping (but not the only one) is:
(R,G,B) -> (R*P+(R+G+B)/3*(1-P), G*P+(R+G+B)/3*(1-P), B*P+(R+G+B)/3*(1-P)).
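The typical greying mapping above can be written as a per-pixel Python function over RGB values normalized to [0,1]; this is a sketch for a single pixel (a real implementation would presumably vectorize it over the whole image):

```python
def desaturate(rgb, p):
    """Blend each channel toward the pixel's grey level (R+G+B)/3 by
    factor (1-P): P=1 keeps the original colour, P=0 gives grey,
    intermediate P varies linearly."""
    r, g, b = rgb
    grey = (r + g + b) / 3.0
    return (r * p + grey * (1 - p),
            g * p + grey * (1 - p),
            b * p + grey * (1 - p))

print(desaturate((0.9, 0.3, 0.0), 0.0))  # fully desaturated: ~0.4 in every channel
print(desaturate((0.9, 0.3, 0.0), 1.0))  # P=1: original colour unchanged
```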
According to the left- and right-eye parameter configuration, adjust the brightness of the image so that the brightness received by the two eyes differs. The detailed process is a pixel-level calculation on the image, so that the image shows different brightness effects according to different parameters. The method is:
The input parameter is the information-strength percentage P (range [0,1]) for each of the two eyes.
It is characterized in that when P is 1 the image is presented in its original colour; when P is 0 the image is rendered as a pure black image; and when P takes intermediate values the image effect varies linearly.
The method performs brightness weakening by a pixel-level mapping. It is described here using the RGB colour-space model of the image (other models such as HSV express it differently, but the mathematics is essentially similar): the RGB value of each pixel is obtained and replaced by the corresponding mapped value. There are many pixel calculations for darkening an image that satisfy the above characteristics; a typical mapping (but not the only one) is:
(R,G,B) -> (R*P, G*P, B*P).
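Likewise the darkening mapping, as a per-pixel sketch over normalized RGB values:

```python
def dim(rgb, p):
    """Scale each channel by P: P=1 keeps the original image,
    P=0 gives a pure black image, intermediate P varies linearly."""
    return tuple(c * p for c in rgb)

print(dim((0.8, 0.4, 0.2), 0.5))  # → (0.4, 0.2, 0.1)
```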
Embodiment 2: independent adjustment of contrast
The steps are as follows:
1. Obtain the eye parameters.
2. Obtain the numerical value of each pixel of the image.
3. Calculate the numerical value of each pixel, specifically as follows:
According to the left- and right-eye parameter configuration, adjust the contrast of the image so that the contrast received by the two eyes differs. The detailed process is a pixel-level calculation on the image, so that the image shows different contrast effects according to different parameters. The method is:
The input parameter is the information-strength percentage P (range [0,1]) for each of the two eyes.
It is characterized in that the colour gamut of the image is redistributed using the value of P: the larger P is, the more pronounced the colour contrast of the image; the smaller P is, the narrower the colour range and the smaller the differences between pixels. When P is 1 the image is presented in its original colour; when P is 0 the image is rendered as a flat grey image; and when P takes intermediate values the image effect varies linearly.
The method performs contrast weakening by a pixel-level mapping. It is described here using the RGB colour-space model of the image (other models such as HSV express it differently, but the mathematics is essentially similar): the RGB value of each pixel is obtained and replaced by the corresponding mapped value. There are many pixel calculations for reducing image contrast that satisfy the above characteristics; a typical mapping (but not the only one) is:
(R,G,B) -> ((R+Max*(1-P))/2, (G+Max*(1-P))/2, (B+Max*(1-P))/2)
where Max is the upper limit of the RGB value range, which can be taken as 1 after normalization.
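The contrast mapping, implemented verbatim as printed above. Note that, as given, the formula halves every channel even at P=1, so the P=1 result in the code reflects the printed mapping rather than the "original colour at P=1" behaviour described earlier; the point illustrated is the gamut compression as P falls:

```python
def reduce_contrast(rgb, p, maximum=1.0):
    """Pull each channel toward a fixed level as P falls:
    (R,G,B) -> ((R+Max*(1-P))/2, ...), with Max=1 after normalization.
    Smaller P narrows the differences between pixels."""
    return tuple((c + maximum * (1 - p)) / 2.0 for c in rgb)

print(reduce_contrast((1.0, 0.5, 0.0), 1.0))  # (0.5, 0.25, 0.0) — full spread kept
print(reduce_contrast((1.0, 0.5, 0.0), 0.0))  # (1.0, 0.75, 0.5) — spread halved, lifted
```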
Embodiment 3: Independent adjustment of image size
The procedure is as follows:
1. Obtain the eye parameters.
2. Obtain the value of each pixel of the image.
3. Compute on the value of each pixel, specifically as follows:
The parameters are configured according to the left and right eyes, and the image size is adjusted so that the image sizes received by the two eyes differ.
The input parameter is the information-strength parameter percentage P (range [0, 1]) for each of the left and right eyes.
It is characterized in that a larger P gives a larger image area and a smaller P gives a smaller image area. When P is 1, the image is presented at its original size; when P is 0, the image area is 0; when P takes an intermediate value, the image effect varies linearly with P.
This method computes the scaling of the image by interpolation, and the process is as follows:
recalculate the size (height and width) of the scaled picture according to the ratio;
for each pixel of the target picture, compute the corresponding coordinates in the source picture;
based on the pixel coordinates, perform the interpolation calculation of the target picture's pixel colors (e.g., nearest-neighbor interpolation, bilinear interpolation);
output the target image.
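The four scaling steps above can be sketched with nearest-neighbor interpolation; bilinear interpolation would follow the same structure with a weighted average of the four surrounding source pixels. Function and variable names are illustrative, not taken from the patent:

```python
def scale_nearest(src, p):
    """Scale a 2D image (list of rows of pixels) by factor p in (0, 1]
    using nearest-neighbor interpolation, following the steps above."""
    src_h, src_w = len(src), len(src[0])
    # Step 1: recalculate the target size by the ratio.
    dst_h = max(1, round(src_h * p))
    dst_w = max(1, round(src_w * p))
    dst = []
    for y in range(dst_h):
        row = []
        for x in range(dst_w):
            # Step 2: corresponding source coordinate for this target pixel.
            sy = min(src_h - 1, int(y * src_h / dst_h))
            sx = min(src_w - 1, int(x * src_w / dst_w))
            # Step 3: nearest-neighbor lookup of the source pixel color.
            row.append(src[sy][sx])
        dst.append(row)
    # Step 4: output the target image.
    return dst
```

At p = 1 the image is returned at its original size, matching the stated behavior of the parameter.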
Although the invention has been described above by way of preferred embodiments, they are not intended to limit the invention. Any person skilled in the art may, without departing from the spirit and scope of the present invention, make possible variations and modifications to the technical solution of the invention using the methods and technical content disclosed above. Therefore, any simple modification, equivalent change, or refinement made to the above embodiments in accordance with the technical essence of the invention, without departing from the content of the technical solution of the invention, falls within the protection scope of the technical solution of the invention.
Claims (10)
1. A management method of 3D images for treating myopia or amblyopia, characterized in that a 2D image is first converted into a dynamically changeable 3D image; the 3D image then comprises a left parallax image and a right parallax image, corresponding respectively to the left eye and the right eye of a human body, wherein the left eye can see only the left parallax image and the right eye can see only the right parallax image;
when the 3D image is played on a terminal, the parameters of the left parallax image and the right parallax image can be adjusted independently according to the needs of the user, and adjusting the parameter of one of the images does not affect the parameter of the other image;
wherein the method of converting the 2D image into a dynamically changeable 3D image is: performing image extraction on the 2D image, placing the extraction result into a 3D coordinate system through a texture-mapping operation, and then performing motion and rendering in the 3D coordinate system to realize the dynamic change, first performing the 3D coordinate transformation calculation and then forming the 2D image projection through a projection calculation; by computing parallax in real time, the multimedia content of the 2D image is incorporated into a 3D scene in the form of a movable plane, so that it can undergo the dynamic change of the 3D image in the 3D virtual scene in a predetermined manner.
2. The management method of 3D images for treating myopia or amblyopia according to claim 1, characterized in that the dynamic change of the 3D image is one or more of translation, rotation, flipping, or scaling of the image.
3. The management method of 3D images for treating myopia or amblyopia according to claim 1, characterized in that the method of converting the 2D image into a dynamically changeable 3D image comprises the following steps:
step a, performing image extraction on the 2D image;
step b, selecting the required 3D virtual scene;
step c, constructing a 2D resource plane in the 3D virtual scene, and establishing for the 2D resource plane its initial position, angle, and size in the scene coordinate system;
step d, constructing the intrinsic coordinate system of the 2D resource plane, and, by parsing the features of the 2D resource image, performing a coordinate-mapping calculation that converts the 2D resource image into a texture of the 2D resource plane;
step e, by computing parallax in real time, incorporating the 2D multimedia content into the 3D scene in the form of a movable plane, so that it can undergo configurable 3D transformation.
4. The management method of 3D images for treating myopia or amblyopia according to claim 3, characterized in that in step e, the 2D resource entity is transformed in the 3D scene by a configurable variation pattern, and the calculation process is as follows:
obtain the plane's initial coordinate matrix and the current coordinate-transformation matrix;
obtain the configured plane motion trajectory, compute the transition matrix of the current stage, and smooth it;
perform the coordinate calculation of the resource plane in the 3D scene to obtain the coordinates of the resource plane in the next frame, and recalculate the light-reflection effect;
compute the current position and viewing angle of the eyes in the virtual scene from the motion-sensor data of the VR equipment and the configuration data of the equipment;
from the resource-plane position of the next frame and the virtual position of the eyes, compute the projection relation of all objects in the next frame's 3D scene, including the 2D content, to form the dual screens with a parallax effect.
5. The management method of 3D images for treating myopia or amblyopia according to claim 1, characterized in that the parameters of the left parallax image and the right parallax image are one or more of the size, brightness, grayscale, saturation, and contrast of all or part of the picture of the left parallax image and the right parallax image.
6. The management method of 3D images for treating myopia or amblyopia according to claim 1, characterized in that the parameters of part of the picture of the left parallax image and the right parallax image remain fixed, and adjustment changes only the parameters of the designated portion.
7. The management method of 3D images for treating myopia or amblyopia according to claim 1, characterized in that the method of independently adjusting the parameters is to perform configurable content-masking processing according to the left-eye and right-eye parameter settings, comprising the following steps:
obtain the screen content for the two eyes of the VR equipment and convert it into images;
split the content into left and right screens, and perform configurable content-masking processing according to the left-eye and right-eye parameters;
according to the left-eye and right-eye parameter configuration, independent processing of color, brightness, and contrast can be performed on the left-eye and right-eye images;
according to the left-eye and right-eye parameter configuration, independent processing of image size can be performed on the left-eye and right-eye images;
output the left-eye and right-eye images.
8. The management method of 3D images for treating myopia or amblyopia according to claim 7, characterized in that the steps of performing configurable content-masking processing according to the left-eye and right-eye parameters are:
step 1, copy the picture content into two copies, one for each eye;
step 2, map the left-eye and right-eye parameters to masking percentages P1 and P2 for the two copies, where the sum of the two eyes' masking satisfies P1 + P2 < 100%;
step 3, obtain the total pixel count X of the picture, and calculate the unit size of the mosaic according to the configuration, with a square of m*n pixels as one basic unit;
step 4, randomly select Y basic units as mosaic squares, so that the total mosaic pixel count satisfies Y*m*n = X*(P1+P2);
step 5, randomly assign the chosen masking squares to the left-eye and right-eye images in the ratio P1:P2.
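Steps 3 to 5 of claim 8 can be sketched as follows; the grid-based tiling and all names are illustrative assumptions, not details from the patent:

```python
import random

def assign_mask_squares(width, height, m, n, p1, p2, rng=None):
    """Pick mosaic squares and split them between the two eyes.

    The width x height picture is tiled into m x n basic units; Y units
    are chosen at random so that the masked pixel total Y*m*n is about
    X*(P1+P2) with X = width*height (steps 3 and 4), and each chosen
    unit is assigned to the left-eye or right-eye image with probability
    proportional to P1 : P2 (step 5).
    """
    assert 0 <= p1 and 0 <= p2 and p1 + p2 < 1.0   # step 2 constraint
    rng = rng or random.Random()
    x_total = width * height                        # step 3: total pixel count X
    units = [(ux, uy) for uy in range(height // n)
                      for ux in range(width // m)]
    y_count = round(x_total * (p1 + p2) / (m * n))  # step 4: number of units Y
    chosen = rng.sample(units, min(y_count, len(units)))
    left, right = [], []
    for unit in chosen:                             # step 5: split in ratio P1:P2
        if rng.random() < p1 / (p1 + p2):
            left.append(unit)
        else:
            right.append(unit)
    return left, right
```

Because each unit goes to exactly one eye, no pixel is masked for both eyes at once, which is what the P1 + P2 < 100% constraint preserves overall.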
9. A management system of 3D images for treating myopia or amblyopia, characterized in that the 3D image management system comprises an image extraction module, a computing module, a 3D processing module, a parameter adjustment module, and VR equipment; wherein the image extraction module performs image extraction on the 2D image; the computing module first performs the 3D coordinate transformation calculation and computes parallax in real time; the 3D processing module incorporates the multimedia content of the 2D image into the 3D scene in the form of a movable plane, so that it can undergo the dynamic change of the 3D image in a predetermined manner in the 3D virtual scene of the VR equipment; and the parameter adjustment module realizes the independent adjustment of the parameters of the left parallax image and the right parallax image through configurable content-masking processing according to the left-eye and right-eye parameter settings.
10. A device for treating amblyopia, the device comprising wearable equipment in which the management system of 3D images for treating myopia or amblyopia according to claim 9 is installed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810054258.8A CN108234986B (en) | 2018-01-19 | 2018-01-19 | For treating the 3D rendering management method and management system and device of myopia or amblyopia |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108234986A true CN108234986A (en) | 2018-06-29 |
CN108234986B CN108234986B (en) | 2019-03-15 |
Family
ID=62668067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810054258.8A Active CN108234986B (en) | 2018-01-19 | 2018-01-19 | For treating the 3D rendering management method and management system and device of myopia or amblyopia |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108234986B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111202663A (en) * | 2019-12-31 | 2020-05-29 | 浙江工业大学 | Vision training learning system based on VR technique |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120275709A1 (en) * | 2011-04-28 | 2012-11-01 | Institute For Information Industry | Building texture extracting apparatus and method thereof |
CN103384340A (en) * | 2013-06-28 | 2013-11-06 | 中南大学 | Method for obtaining 3D imaging image from single 2D image |
WO2014070814A2 (en) * | 2012-11-01 | 2014-05-08 | X6D Limited | Glasses for amblyopia treatment |
CN104618710A (en) * | 2015-01-08 | 2015-05-13 | 左旺孟 | Dysopia correction system based on enhanced light field display |
Also Published As
Publication number | Publication date |
---|---|
CN108234986B (en) | 2019-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108271011B (en) | For treating the parameter processing method and system and device of the 3D rendering system of amblyopia | |
JP7094266B2 (en) | Single-depth tracking-accommodation-binocular accommodation solution | |
US11061240B2 (en) | Head-mountable apparatus and methods | |
CN106484116B (en) | The treating method and apparatus of media file | |
JP6870080B2 (en) | Image generator, image display system, and image generation method | |
CN106491323B (en) | For treating the video system and device of amblyopia | |
CN104618710B (en) | Dysopia correction system based on enhanced light field display | |
CN105959664B (en) | The dynamic adjustment of predetermined three-dimensional video setting based on scene content | |
CN105629469B (en) | Head-mounted display apparatus based on liquid crystal lens array | |
CN107272200A (en) | A kind of focal distance control apparatus, method and VR glasses | |
WO2003079272A1 (en) | Materials and methods for simulating focal shifts in viewers using large depth of focus displays | |
CN107669455A (en) | A kind of vision training method, device and equipment | |
US11570426B2 (en) | Computer-readable non-transitory storage medium, web server, and calibration method for interpupillary distance | |
CN114599266A (en) | Light field vision-based test device, adjusted pixel rendering method therefor, and online vision-based test management system and method using same | |
CN110021445A (en) | A kind of medical system based on VR model | |
JP6821646B2 (en) | Virtual / augmented reality system with dynamic domain resolution | |
WO2018176927A1 (en) | Binocular rendering method and system for virtual active parallax computation compensation | |
CN106249407A (en) | Prevention and the system of myopia correction | |
Vasylevska et al. | Towards eye-friendly vr: how bright should it be? | |
CN108143596A (en) | A kind of wear-type vision training instrument, system and training method | |
CN108064447A (en) | Method for displaying image, intelligent glasses and storage medium | |
CN108234986B (en) | For treating the 3D rendering management method and management system and device of myopia or amblyopia | |
US11107276B2 (en) | Scaling voxels in a virtual space | |
CN108053495A (en) | 2D digital resources be converted into can dynamic change 3D digital resources method and system | |
CN107929006A (en) | A kind of vision training method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||