CN110378914A - Rendering method, apparatus and system, and display device based on gaze point information - Google Patents
- Publication number
- CN110378914A (application CN201910663137.8A, filed as CN201910663137A)
- Authority
- CN
- China
- Prior art keywords
- display area
- gaze point
- display
- user
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
This application discloses a rendering method, apparatus, and system based on gaze point information, and a display device. The method comprises: obtaining eye feature data of a user; determining gaze point information of the user from the eye feature data; dividing the display area into different display subregions according to the gaze point corresponding to the gaze point information; and rendering the objects in the different display subregions. The application addresses the technical problem that existing LOD techniques, which mostly decide which precision of rendering model to use according to the importance of the rendered object, its distance from the camera, or its movement speed, cannot adequately optimize graphics rendering performance.
Description
Technical field
This application relates to the field of graphics rendering, and in particular to a rendering method, apparatus, and system based on gaze point information, and a display device.
Background technique
LOD (Levels of Detail) technology refers to deciding the resources allocated to rendering an object according to the position and importance of the object model's nodes in the display environment, reducing the face count and level of detail of unimportant objects so as to obtain efficient rendering.
Fig. 1 is a schematic diagram of object rendering models according to an embodiment of the present application. Fig. 1 shows three models of the same oil drum, representing, from left to right, high-, medium-, and low-polygon model structures, with the face count decreasing from many to few. When the rendered model is viewed from far away, a low-polygon model suffices to show the outline of the drum; when the rendered model is viewed up close and more detail must be visible, a model with a higher face count is required to show the drum well.
Existing LOD techniques mostly decide which precision of rendering model to use according to the importance of the rendered object, its distance from the camera, or its movement speed. These approaches can partially reduce the graphics rendering load, but for an object that is close to the viewer yet not attended to by the user, a low-precision model is not used when rendering it, so graphics rendering performance cannot be optimized well.
No effective solution to the above problem has yet been proposed.
Summary of the invention
Embodiments of the present application provide a rendering method, apparatus, and system based on gaze point information, and a display device, to at least solve the technical problem that existing LOD techniques, which mostly decide which precision of rendering model to use according to the importance of the rendered object, its distance from the camera, or its movement speed, cannot adequately optimize graphics rendering performance.
According to one aspect of the embodiments of the present application, a rendering method based on gaze point information is provided, comprising: obtaining eye feature data of a user; determining gaze point information of the user from the eye feature data; dividing the display area into different display subregions according to the gaze point corresponding to the gaze point information; and rendering the target objects in the different display subregions.
Optionally, dividing the display area into different display subregions according to the gaze point corresponding to the gaze point information comprises: determining multiple display subregions in the display area; and dividing the multiple display subregions into display areas of different sharpness levels according to the distance between the edge of each display subregion and the gaze point.
Optionally, dividing the multiple display subregions into display areas of different sharpness levels according to the distance between the edge of each display subregion and the gaze point comprises: if the distance is less than or equal to a first threshold, classifying the display subregion as a first display area; if the distance is greater than the first threshold and less than or equal to a second threshold, classifying it as a second display area; and if the distance is greater than the second threshold, classifying it as a third display area, where the sharpness levels of the first, second, and third display areas decrease in that order.
Optionally, dividing the multiple display subregions into display areas of different sharpness levels according to the distance between the edge of each display subregion and the gaze point comprises: with the gaze point as the center, determining two circular areas, where the radius of the second circular area is greater than that of the first, and each radius is the distance between the edge of the display area where an object is located and the gaze point corresponding to the gaze point information; taking the first circular area as the first display area; taking the annular region between the edge of the first circular area and the edge of the second circular area as the second display area; and taking the rest of the display area, excluding the first and second display areas, as the third display area, where the sharpness levels of the first, second, and third display areas decrease in that order.
Optionally, the gaze point information includes at least one of: the position of the user's gaze point, and the user's gaze direction.
Optionally, rendering the objects in the different display subregions comprises: when rendering an object in a display subregion of a high sharpness level, constructing a rendering model with more faces and vertices than the rendering model constructed when rendering an object in a display subregion of a low sharpness level, where a rendering model is the model that needs to be constructed when an object is rendered.
According to another aspect of the embodiments of the present application, a rendering apparatus based on gaze point information is also provided, comprising: an obtaining module, for obtaining eye feature data of a user; a determining module, for determining gaze point information of the user from the eye feature data; a division module, for dividing the display area into different display subregions according to the gaze point corresponding to the gaze point information; and a rendering module, for rendering the objects in the different display subregions.
According to another aspect of the embodiments of the present application, a rendering system based on gaze point information is also provided, comprising: a display device, for displaying target objects; an eye tracking device, for obtaining eye feature data of a user, determining gaze point information of the user from the eye feature data, and sending the user's gaze point information to a processor; and a processor, communicatively connected to the eye tracking device, for dividing the display area into different display subregions according to the gaze point corresponding to the gaze point information, and rendering the objects in the different display subregions.
According to another aspect of the embodiments of the present application, a display device is also provided, comprising: a display screen, for displaying objects; and a processor, for obtaining eye feature data of a user, determining gaze point information of the user from the eye feature data, dividing the display area into different display subregions according to the gaze point corresponding to the gaze point information, and rendering the objects in the different display subregions.
According to another aspect of the embodiments of the present application, a storage medium is also provided. The storage medium includes a stored program, where, when the program runs, the device on which the storage medium resides is controlled to execute the above rendering method based on gaze point information.
According to another aspect of the embodiments of the present application, a processor is also provided. The processor is configured to run a program, where the program, when run, executes the above rendering method based on gaze point information.
In the embodiments of the present application, eye feature data of a user is obtained; gaze point information of the user is determined from the eye feature data; the display area is divided into different display subregions according to the gaze point corresponding to the gaze point information; and the objects in the different display subregions are rendered. By obtaining the user's gaze point information and selecting the rendering model of each rendered object according to it, rendered objects in the central gaze area use a high-precision rendering model while rendered objects outside the central gaze area use a low-precision rendering model. This achieves the purpose of further reducing the graphics rendering load, realizes the technical effect of optimizing graphics rendering performance when object models are rendered with LOD technology, and thus solves the technical problem that existing LOD techniques, which mostly decide which precision of rendering model to use according to the importance of the rendered object, its distance from the camera, or its movement speed, cannot adequately optimize graphics rendering performance.
Detailed description of the invention
The drawings described herein are provided for a further understanding of the present application and constitute a part of this application. The illustrative embodiments of the present application and their descriptions serve to explain the application and do not constitute an undue limitation on it. In the drawings:
Fig. 1 is a schematic diagram of object rendering models according to an embodiment of the present application;
Fig. 2 is a flowchart of a rendering method based on gaze point information according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the display screen of a display device according to an embodiment of the present application;
Fig. 4a is a schematic diagram of dividing the display screen into display areas using gaze point information according to an embodiment of the present application;
Fig. 4b is a schematic diagram of another way of dividing the display screen into display areas using gaze point information according to an embodiment of the present application;
Fig. 5 is a structural diagram of a rendering apparatus based on gaze point information according to an embodiment of the present application;
Fig. 6 is a structural diagram of a rendering system based on gaze point information according to an embodiment of the present application;
Fig. 7 is a structural diagram of a display device according to an embodiment of the present application.
Specific embodiment
To help those skilled in the art better understand the solution of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of this application.
It should be noted that the terms "first", "second", etc. in the description, claims, and drawings of this application are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so designated are interchangeable where appropriate, so that the embodiments described herein can be implemented in an order other than that illustrated or described here. In addition, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to such a process, method, product, or device.
According to an embodiment of the present application, a method embodiment of a rendering method based on gaze point information is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one given here.
Fig. 2 is a flowchart of a rendering method based on gaze point information according to an embodiment of the present application. As shown in Fig. 2, the method comprises the following steps:
Step S202: obtain the eye feature data of the user.
In some embodiments of the present application, the above gaze-point-based rendering method can be applied to virtual reality (VR) technology. When step S202 is executed, the user's eye feature data can be acquired by an eye tracking module arranged in the VR device. The eye feature data may include data such as pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, light spot (also called Purkinje image) position, myoelectric current, and capacitance.
Optionally, the above gaze-point-based rendering method can also be applied to augmented reality (AR) or mixed reality (MR) technology.
Step S204: determine the gaze point information of the user from the eye feature data.
According to an optional embodiment of the application, the gaze point information includes at least one of: the position of the user's gaze point, and the user's gaze direction.
In some optional embodiments of the application, the user's gaze point information is obtained by eye tracking technology when step S204 is executed. Eye tracking, also called gaze tracking, is a technology that estimates the line of sight and/or gaze point of the eyes by measuring eye movement. The line of sight can be understood as a three-dimensional vector, and the gaze point as the two-dimensional coordinates of that vector's projection onto some plane.
The optical recording method is widely used at present: the eye movement of the subject is recorded with a camera or video camera, an eye image reflecting the eye movement is obtained, and eye features are extracted from the acquired eye image to build a model for line-of-sight/gaze point estimation. The eye features may include pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, light spot (also called Purkinje image) position, etc.
Among optical recording methods, the most mainstream eye tracking method at present is the pupil-corneal reflection method. Other methods may also include methods not based on eye images, such as computing eye movement from contact or non-contact sensors (e.g., electrodes, capacitive sensors). The pupil-corneal reflection method specifically includes the following steps:
(1) Acquire an eye image: a light source is directed at the eyes, and the eyes are photographed by an image capture device, so that the reflection point of the light source on the cornea, i.e., the light spot (also called Purkinje image), is captured in the corresponding image, thereby obtaining an eye image containing a light spot. It should be noted that the light source is generally an infrared light source, because infrared light does not affect the vision of the eyes; multiple infrared light sources can be arranged in a predetermined manner, such as an isosceles triangle or a straight line; the image capture device includes but is not limited to infrared imaging devices, infrared image sensors, cameras, and video cameras.
In some optional embodiments of the application, the eye image can also be obtained through a MEMS (Micro-Electro-Mechanical System) or a gaze tracking device (such as an eye tracker).
(2) Extract eye feature information: the eye features may include pupil position, pupil shape, iris position, iris shape, eyelid position, eye corner position, light spot position, myoelectric current, capacitance, etc.
(3) Determine calibration parameters from the eye feature information: as the eyeball rotates, the relative position of the pupil center and the light spot changes accordingly, and the several collected eye images containing light spots reflect this positional change relationship; line-of-sight/gaze point estimation is performed according to the positional change relationship. In the line-of-sight/gaze point estimation process, in order to determine certain undetermined parameters of the estimation model (also called calibration parameters, generally corresponding to certain intrinsic parameters of the user's eyeball, such as the eyeball radius), a common method is to have the user gaze at one or more target points; the line of sight to a target point is assumed known (because the target points are preset), and the above calibration parameters can thereby be solved for.
(4) Obtain the user's gaze point information from the user's line of sight and the calibration parameters.
It should be noted that the gaze point information includes one or more of: a gaze vector (the gaze area being a cone rotated through a certain angle around it), gaze point coordinates (usually the intersection of the line-of-sight vector and an object, where the object includes real objects, virtual objects, display screens, etc., and the gaze area is then a circle or other region centered on the gaze point), and gaze depth.
Step S206: divide the display area into different display subregions according to the gaze point corresponding to the gaze point information.
In some embodiments of the present application, step S206 is implemented as follows: determine multiple display subregions in the display area; divide the multiple display subregions into display areas of different sharpness levels according to the distance between the edge of each display subregion and the gaze point.
In some embodiments of the present application, the above display area can be the display screen of a VR device. It can also be the display screen of any of the following display terminals: AR devices, MR devices, mobile terminals such as mobile phones, computers, etc.
In a specific implementation, multiple display subregions are delimited in the display screen of the display device, and the multiple display subregions are divided into display areas of different sharpness levels according to each subregion's distance from the gaze point.
Specifically, the region closer to the gaze point is divided into one subregion, the region slightly farther from the gaze point into another subregion, the region still farther from the gaze point into yet another subregion, and so on. The number and shape of the subregions can be chosen as needed, but the number of subregions is generally greater than two.
Step S208: render the objects in the different display subregions.
Through the above steps, the user's gaze point information is obtained, and the rendering model of each rendered object is selected according to the display subregion it falls in: rendered objects in the central gaze area use a high-precision rendering model, while rendered objects outside the central gaze area use a low-precision rendering model. This achieves the purpose of further reducing the graphics rendering load, realizing the technical effect of optimizing graphics rendering performance when object models are rendered with LOD technology.
According to an optional embodiment of the application, dividing the multiple display subregions into display areas of different sharpness levels according to the distance between the edge of each display subregion and the gaze point comprises: classifying a display subregion as the first display area when the distance is less than or equal to a first threshold; classifying it as the second display area when the distance is greater than the first threshold and less than or equal to a second threshold; and classifying it as the third display area when the distance is greater than the second threshold, where the sharpness levels of the first, second, and third display areas decrease in that order.
Because the fovea of the human visual system provides clearer vision at the center of the visual field (the visual field being the maximum range the human eye can observe), while the visual quality of peripheral vision is relatively lower, the objects in the user's gaze area can be rendered based on the gaze point information. The display area of the display device's screen is divided according to the gaze point information into a high-sharpness region, a medium-sharpness region, and a low-sharpness region: the high-sharpness region is the display area at the center of the user's visual field (i.e., the display area nearest the user's gaze point), the medium-sharpness region is the display area slightly farther from the user's gaze point, and the low-sharpness region is the remaining display area of the screen, farthest from the user's gaze point. With this division method, even an object close to the user is rendered with a low-precision rendering model if it does not lie in the central region of the user's visual field, while objects located in the high-sharpness display area are rendered with a high-precision rendering model.
Fig. 3 is a schematic diagram of the display screen of a display device according to an embodiment of the present application. As shown in Fig. 3, according to the user's gaze point information, the display screen is divided into a high-sharpness region (Hi Resolution), a medium-sharpness region (Mid Resolution), and a low-sharpness region (Low Resolution). The high-sharpness region is the region nearest the user's gaze point, i.e., the foveal area (Foveal) of human vision; the medium-sharpness region is the region slightly farther from the user's gaze point (Blend); the low-sharpness region is the region farthest from the user's gaze point, i.e., the peripheral region (Peripheral) of the user's visual field. Since the central gaze area is only a small fraction of the entire visual field, once the on-screen display area has been divided based on the user's gaze point information as above, high-precision models are used only in the high-sharpness region, which reduces the demands on the rendering hardware without affecting the user's visual experience.
In a specific implementation, a threshold a and a threshold b can be preset. If the distance between the edge of any display area in the display screen and the gaze point is less than or equal to threshold a, that display area is classified as the high-sharpness region; if the distance between the edge of any display area in the screen and the gaze point is greater than threshold a and less than or equal to threshold b, that display area is classified as the medium-sharpness region; the other display areas in the screen (i.e., regions whose edges are at a distance greater than threshold b from the gaze point) are classified as the low-sharpness region.
Fig. 4a is a schematic diagram of dividing the display screen into display areas using gaze point information according to an embodiment of the present application. As shown in Fig. 4a, two circular areas are determined with the gaze point as the center, where the radius of the second circle is greater than that of the first, and each radius is the distance between the edge of the display area where an object is located and the gaze point corresponding to the gaze point information. The first circular area serves as the first display area; the annular region between the edge of the first circle and the edge of the second circle serves as the second display area; the rest of the display area, excluding the first and second display areas, serves as the third display area, where the sharpness levels of the first, second, and third display areas decrease in that order.
It should be noted that this embodiment uses a circle as an example to illustrate how to divide the display area, but the present invention is not limited to a circle as the dividing shape; other figures such as ellipses, regular polygons, diamonds, equilateral triangles, and hearts can all serve as dividing figures.
Fig. 4b is a schematic diagram of another way of dividing the display screen into display areas using gaze point information according to an embodiment of the present application. As shown in Fig. 4b, when the display area is divided with rectangles, two rectangular areas are determined with the gaze point as the intersection of each rectangle's diagonals, where the diagonal of the second rectangular area is longer than that of the first, and half of each diagonal is the distance between the edge of the display area where an object is located and the gaze point corresponding to the gaze point information. The first rectangular area serves as the first display area; the ring-shaped region between the edge of the first rectangular area and the edge of the second rectangular area serves as the second display area; the display area other than the first and second display areas serves as the third display area, where the sharpness levels of the first, second, and third display areas decrease in that order. It should be noted that the number of divisions is not limited to the three illustrated; it can be set according to the user's needs.
In addition, when the display area is divided with ellipses, the gaze point serves as the center of each ellipse; when it is divided with polygons, the gaze point serves as the intersection of each polygon's diagonals. For the specific division methods, refer to the related descriptions of the embodiments illustrated in Fig. 4a and Fig. 4b, which are not repeated here.
With the above method, when an object model is rendered using LOD technology, the face count and vertex count of the models that need to be rendered can be further reduced, relieving graphics rendering pressure and better optimizing graphics rendering performance. For a technology such as VR, which places high demands on frame rate, saving the processing capacity of the VR device is crucial, and particularly important for all-in-one VR headsets.
In some embodiments of the present application, step S208 can be implemented as follows: when rendering an object in a display subregion of a high sharpness level, the rendering model is constructed with more faces and vertices than the rendering model constructed when rendering an object in a display subregion of a low sharpness level, where a rendering model is the model that needs to be constructed when an object is rendered.
Taking Fig. 4a or Fig. 4b as an example, when rendering target objects in the first display area, the rendering model is constructed with more vertices and faces than when rendering objects in the second display area; likewise, when rendering objects in the second display area, the rendering model is constructed with more vertices and faces than when rendering objects in the third display area. In this way, while preserving the user's visual experience, the demands on the rendering hardware can be further reduced, achieving the technical effect of optimized graphics rendering performance.
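This per-region model selection can be sketched minimally as follows; the face/vertex budgets and region names are illustrative assumptions, not values from the disclosure:

```python
# Assumed per-region mesh budgets; the first (foveal) display area
# gets the densest rendering model, the third the sparsest.
LOD_BUDGETS = {
    "first":  {"faces": 20000, "vertices": 10000},
    "second": {"faces": 5000,  "vertices": 2500},
    "third":  {"faces": 1000,  "vertices": 500},
}

def select_lod(region, lod_meshes):
    """Pick a prebuilt mesh for an object by its display area.
    lod_meshes is ordered densest-first, e.g. [high, mid, low]."""
    index = {"first": 0, "second": 1, "third": 2}[region]
    return lod_meshes[min(index, len(lod_meshes) - 1)]
```

In practice the LOD meshes would be authored or decimated offline, so the per-frame cost is only the table lookup shown here.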
Fig. 5 is a structural diagram of a rendering apparatus based on gaze point information according to an embodiment of the present application. As shown in Fig. 5, the apparatus includes:
an obtaining module 50, configured to obtain eye feature data of a user;
a determining module 52, configured to determine gaze point information of the user according to the eye feature data;
a division module 54, configured to divide the display area into different display subregions according to the gaze point, in the display area, corresponding to the gaze point information.
In some embodiments of the present application, the division module 54 includes: a first determining unit, configured to determine a plurality of display subregions in the display area; and a division unit, configured to divide the plurality of display subregions into display areas of different sharpness levels according to the distances between the edges of the plurality of display subregions and the gaze point.
According to an optional embodiment of the present application, the division unit includes: a first division subunit, configured to classify a display subregion as the first display area when its distance is less than or equal to a first threshold; a second division subunit, configured to classify a display subregion as the second display area when its distance is greater than the first threshold and less than or equal to a second threshold; and a third division subunit, configured to classify a display subregion as the third display area when its distance is greater than the second threshold. The sharpness levels of the first display area, the second display area, and the third display area decrease in that order.
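The two-threshold classification described above can be sketched as follows; the threshold values are assumptions for illustration only:

```python
import math

FIRST_THRESHOLD = 100.0   # pixels; assumed value for illustration
SECOND_THRESHOLD = 300.0  # pixels; assumed value for illustration

def classify_subregion(edge_point, gaze_point):
    """Assign a display subregion to the first, second, or third
    display area by the distance from its edge to the gaze point."""
    d = math.dist(edge_point, gaze_point)
    if d <= FIRST_THRESHOLD:
        return "first"    # highest sharpness
    if d <= SECOND_THRESHOLD:
        return "second"   # intermediate sharpness
    return "third"        # lowest sharpness
```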
According to an optional embodiment of the present application, the division module 54 further includes: a second determining unit, configured to determine two circular regions centered on the gaze point, where the radius of the second circular region is greater than that of the first circular region, and each radius is the distance from the edge of the display area where a target object is located to the gaze point corresponding to the gaze point information; a first setting unit, configured to take the first circular region as the first display area; a second setting unit, configured to take the annular region bounded by the edge of the first circular region and the edge of the second circular region as the second display area; and a third setting unit, configured to take the part of the display area other than the first display area and the second display area as the third display area. The sharpness levels of the first display area, the second display area, and the third display area decrease in that order.
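As a rough illustration of why this circle-and-annulus division saves rendering work, the fraction of the screen covered by each region can be computed from the two radii; all numbers here are assumptions, not values from the disclosure:

```python
import math

def region_fractions(width, height, r1, r2):
    """Approximate screen-area fractions of the first (inner disc),
    second (annulus), and third (remainder) display areas, assuming
    both circles fit entirely on a width x height screen."""
    screen = width * height
    first = math.pi * r1 ** 2
    second = math.pi * (r2 ** 2 - r1 ** 2)
    third = screen - first - second
    return first / screen, second / screen, third / screen
```

For example, on a 1000 x 1000 screen with radii 100 and 300, only a few percent of the pixels fall in the full-sharpness disc, so most of the screen can use cheaper models.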
a rendering module 56, configured to render the target objects in the different display subregions.
It should be noted that for preferred implementations of the embodiment shown in Fig. 5, reference may be made to the related descriptions of the embodiment shown in Fig. 2, which are not repeated here.
Fig. 6 is a structural diagram of a rendering system based on gaze point information according to an embodiment of the present application. As shown in Fig. 6, the system includes:
a display device 60, configured to display target objects.
The rendering method based on gaze point information provided by the above embodiments of the present application can be applied to technologies such as VR, AR, and MR, so the display device 60 may be a VR device, an AR device, or an MR device; it may also be a mobile terminal device such as a mobile phone or tablet, or a computer. The display device 60 is mainly responsible for presenting the screen.
an eye tracking device 62, configured to obtain eye feature data of the user, determine the user's gaze point information according to the eye feature data, and send the user's gaze point information to the processor.
The eye tracking device 62 is a device that captures eye images and analyzes them in real time to obtain gaze point information. It may be a plug-in component, or it may be integrated into the display device 60.
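The disclosure does not detail how eye feature data maps to a gaze point; as a loose sketch of one common approach, a per-user affine calibration can map pupil-center coordinates from the eye camera to screen coordinates (all coefficient names here are hypothetical):

```python
def gaze_from_pupil(pupil_xy, calibration):
    """Map a pupil-center position in eye-camera coordinates to a
    screen coordinate via an affine calibration obtained during a
    per-user calibration step.
    calibration = (ax, bx, cx, ay, by, cy)."""
    x, y = pupil_xy
    ax, bx, cx, ay, by, cy = calibration
    return (ax * x + bx * y + cx, ay * x + by * y + cy)
```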
a processor 64, communicatively connected to the eye tracking device, configured to divide the display area into different display subregions according to the gaze point, in the display area, corresponding to the gaze point information, and to render the target objects in the different display subregions.
The processor 64 runs a software program that divides the display screen of the display device 60 into display areas of different resolution grades according to the gaze point information obtained by the eye tracking device 62. The display area located in the user's central field of view is designated the high-resolution region, and high-precision models are selected when rendering target objects there; the display areas in the user's non-central field of view are designated the sub-clear region and the low-clarity region, and lower-precision models are selected when rendering the target objects located in them.
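Putting the components of this system together, one frame of such a gaze-driven pipeline might look like the following sketch; the component and method names are illustrative, not APIs from the disclosure:

```python
def render_frame(eye_tracker, screen, objects):
    """One frame of a gaze-driven foveated rendering loop (sketch).
    eye_tracker.gaze() -> (x, y) screen coordinate; screen.region_of()
    -> 0 for the foveal region, increasing outward; each object has
    .position and .meshes, a densest-first list of LOD meshes."""
    gaze = eye_tracker.gaze()
    rendered = []
    for obj in objects:
        region = screen.region_of(obj.position, gaze)
        mesh = obj.meshes[min(region, len(obj.meshes) - 1)]
        rendered.append((obj, mesh))
    return rendered  # pairs of (object, mesh chosen for this frame)
```

The loop shows the division of labor in Fig. 6: the eye tracking device supplies the gaze point, the processor classifies each object's region and picks a model, and the display device presents the result.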
It should be noted that the processor 64 may be a local processor provided in the display device 60, or a server deployed in the cloud; no restriction is placed on this here.
For preferred implementations of the embodiment shown in Fig. 6, reference may be made to the related descriptions of the embodiment shown in Fig. 2, which are not repeated here.
Fig. 7 is a structural diagram of a display device according to an embodiment of the present application. As shown in Fig. 7, the display device includes:
a display screen 70, configured to display target objects;
a processor 72, configured to obtain eye feature data of a user, determine the user's gaze point information according to the eye feature data, divide the target display area into different display subregions according to the gaze point, in the display area, corresponding to the gaze point information, and render the target objects in the different display subregions.
The rendering method based on gaze point information provided by the above embodiments of the present application can be applied to technologies such as VR, AR, and MR; therefore, the display device may be a VR device, an AR device, or an MR device; it may also be a mobile terminal device such as a mobile phone or tablet, or a computer.
It should be noted that for preferred implementations of the embodiment shown in Fig. 7, reference may be made to the related descriptions of the embodiment shown in Fig. 2, which are not repeated here.
An embodiment of the present application also provides a storage medium. The storage medium includes a stored program, where, when the program runs, the device on which the storage medium resides is controlled to execute the above rendering method based on gaze point information.
The storage medium stores a program for performing the following functions: obtaining eye feature data of a user; determining the user's gaze point information according to the eye feature data; dividing the display area into different display subregions according to the gaze point, in the display area, corresponding to the gaze point information; and rendering the target objects in the different display subregions.
In another aspect, an embodiment of the present application also provides a processor. The processor is configured to run a program, where the program, when running, executes the above rendering method based on gaze point information.
The processor runs a program for performing the following functions: obtaining eye feature data of a user; determining the user's gaze point information according to the eye feature data; dividing the display area into different display subregions according to the gaze point, in the display area, corresponding to the gaze point information; and rendering the target objects in the different display subregions.
The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present application, each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of units may be a division by logical function; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, or an optical disk.
The above are only preferred embodiments of the present application. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.
Claims (10)
1. A rendering method based on gaze point information, characterized by comprising:
obtaining eye feature data of a user;
determining gaze point information of the user according to the eye feature data;
dividing, according to the gaze point information, a display area into display subregions of different sharpness levels; and
rendering target objects in the different display subregions.
2. The method according to claim 1, characterized in that dividing the display area into display subregions of different sharpness levels according to the gaze point information comprises:
determining a plurality of display subregions in the display area; and
determining the sharpness levels of the plurality of display subregions according to distances between edges of the plurality of display subregions and the gaze point.
3. The method according to claim 2, characterized in that determining the sharpness levels of the plurality of display subregions according to the distances between the edges of the plurality of display subregions and the gaze point comprises:
if the distance is less than or equal to a first threshold, classifying the display subregion as a first display area;
if the distance is greater than the first threshold and less than or equal to a second threshold, classifying the display subregion as a second display area; and
if the distance is greater than the second threshold, classifying the display subregion as a third display area, wherein the sharpness levels of the first display area, the second display area, and the third display area decrease in that order.
4. The method according to claim 2, characterized in that determining the sharpness levels of the plurality of display subregions according to the distances between the edges of the plurality of display subregions and the gaze point comprises:
determining two circular regions centered on the gaze point, wherein the radius of the second circular region is greater than the radius of the first circular region, and the radius is the distance from the edge of the display area where the target object is located to the gaze point corresponding to the gaze point information; and
taking the first circular region as a first display area; taking the annular region bounded by the edge of the first circular region and the edge of the second circular region as a second display area; and taking the part of the display area other than the first display area and the second display area as a third display area, wherein the sharpness levels of the first display area, the second display area, and the third display area decrease in that order.
5. The method according to claim 1, characterized in that the gaze point information comprises at least one of: a position of the gaze point of the user, and a gaze direction of the user.
6. The method according to any one of claims 1 to 5, characterized in that rendering the target objects in the different display subregions comprises:
constructing, when rendering target objects in a display subregion of a higher sharpness level, a rendering model with more faces and vertices than when rendering target objects in a display subregion of a lower sharpness level, wherein the rendering model is the model that needs to be constructed in order to render the target object.
7. A rendering apparatus based on gaze point information, characterized by comprising:
an obtaining module, configured to obtain eye feature data of a user;
a determining module, configured to determine gaze point information of the user according to the eye feature data;
a division module, configured to divide a display area into different display subregions according to the gaze point, in the display area, corresponding to the gaze point information; and
a rendering module, configured to render target objects in the different display subregions.
8. A rendering system based on gaze point information, characterized by comprising:
a display device, configured to display target objects;
an eye tracking device, configured to obtain eye feature data of a user, determine gaze point information of the user according to the eye feature data, and send the gaze point information of the user to a processor; and
the processor, communicatively connected to the eye tracking device, configured to divide a display area into different display subregions according to the gaze point, in the display area, corresponding to the gaze point information, and to render the target objects in the different display subregions.
9. A display device, characterized by comprising:
a display screen, configured to display target objects; and
a processor, configured to obtain eye feature data of a user, determine gaze point information of the user according to the eye feature data, divide a display area into different display subregions according to the gaze point, in the display area, corresponding to the gaze point information, and render the target objects in the different display subregions.
10. A storage medium, characterized in that the storage medium comprises a stored program, wherein, when the program runs, a device on which the storage medium resides is controlled to execute the rendering method based on gaze point information according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910663137.8A CN110378914A (en) | 2019-07-22 | 2019-07-22 | Rendering method and device, system, display equipment based on blinkpunkt information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110378914A true CN110378914A (en) | 2019-10-25 |
Family
ID=68254852
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910663137.8A Withdrawn CN110378914A (en) | 2019-07-22 | 2019-07-22 | Rendering method and device, system, display equipment based on blinkpunkt information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110378914A (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106101533A (en) * | 2016-06-15 | 2016-11-09 | 努比亚技术有限公司 | Render control method, device and mobile terminal |
CN106484116A (en) * | 2016-10-19 | 2017-03-08 | 腾讯科技(深圳)有限公司 | The treating method and apparatus of media file |
CN107153519A (en) * | 2017-04-28 | 2017-09-12 | 北京七鑫易维信息技术有限公司 | Image transfer method, method for displaying image and image processing apparatus |
CN107168668A (en) * | 2017-04-28 | 2017-09-15 | 北京七鑫易维信息技术有限公司 | Image data transfer method, device and storage medium, processor |
US20170302918A1 (en) * | 2016-04-15 | 2017-10-19 | Advanced Micro Devices, Inc. | Efficient streaming of virtual reality content |
CN107516335A (en) * | 2017-08-14 | 2017-12-26 | 歌尔股份有限公司 | The method for rendering graph and device of virtual reality |
CN107967707A (en) * | 2016-10-18 | 2018-04-27 | 三星电子株式会社 | For handling the apparatus and method of image |
CN108919958A (en) * | 2018-07-16 | 2018-11-30 | 北京七鑫易维信息技术有限公司 | A kind of image transfer method, device, terminal device and storage medium |
US20180357752A1 (en) * | 2017-06-09 | 2018-12-13 | Sony Interactive Entertainment Inc. | Foveal Adaptation of Temporal Anti-Aliasing |
CN109087260A (en) * | 2018-08-01 | 2018-12-25 | 北京七鑫易维信息技术有限公司 | A kind of image processing method and device |
CN109242943A (en) * | 2018-08-21 | 2019-01-18 | 腾讯科技(深圳)有限公司 | A kind of image rendering method, device and image processing equipment, storage medium |
CN109388448A (en) * | 2017-08-09 | 2019-02-26 | 京东方科技集团股份有限公司 | Image display method, display system and computer readable storage medium |
CN109727305A (en) * | 2019-01-02 | 2019-05-07 | 京东方科技集团股份有限公司 | Virtual reality system picture processing method, device and storage medium |
CN109766011A (en) * | 2019-01-16 | 2019-05-17 | 北京七鑫易维信息技术有限公司 | A kind of image rendering method and device |
CN109801353A (en) * | 2019-01-16 | 2019-05-24 | 北京七鑫易维信息技术有限公司 | A kind of method of image rendering, server and terminal |
Non-Patent Citations (1)
Title |
---|
LIU Xianmei et al.: "Virtual Reality Technology and Its Applications", 31 May 2004, Harbin Map Press *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11804194B2 (en) | 2020-02-25 | 2023-10-31 | Beijing Boe Optoelectronics Technology Co., Ltd. | Virtual reality display device and display method |
CN111338591A (en) * | 2020-02-25 | 2020-06-26 | 京东方科技集团股份有限公司 | Virtual reality display equipment and display method |
CN111338591B (en) * | 2020-02-25 | 2022-04-12 | 京东方科技集团股份有限公司 | Virtual reality display equipment and display method |
CN113538698A (en) * | 2020-04-16 | 2021-10-22 | 同济大学 | Model display device and model display method |
CN114077465A (en) * | 2020-08-10 | 2022-02-22 | Oppo广东移动通信有限公司 | UI (user interface) rendering method and device, electronic equipment and storage medium |
CN112634461A (en) * | 2020-12-18 | 2021-04-09 | 上海影创信息科技有限公司 | Method and system for enhancing reality of afterglow area |
WO2022133683A1 (en) * | 2020-12-21 | 2022-06-30 | 京东方科技集团股份有限公司 | Mixed reality display method, mixed reality device, and storage medium |
CN113223183B (en) * | 2021-04-30 | 2023-03-10 | 杭州小派智能科技有限公司 | Rendering method and system based on existing VR content |
CN113223183A (en) * | 2021-04-30 | 2021-08-06 | 杭州小派智能科技有限公司 | Rendering method and system based on existing VR (virtual reality) content |
CN113362450A (en) * | 2021-06-02 | 2021-09-07 | 聚好看科技股份有限公司 | Three-dimensional reconstruction method, device and system |
CN113709375A (en) * | 2021-09-06 | 2021-11-26 | 维沃移动通信有限公司 | Image display method and device and electronic equipment |
CN113709375B (en) * | 2021-09-06 | 2023-07-11 | 维沃移动通信有限公司 | Image display method and device and electronic equipment |
CN114972608A (en) * | 2022-07-29 | 2022-08-30 | 成都航空职业技术学院 | Method for rendering cartoon character |
CN114972608B (en) * | 2022-07-29 | 2022-11-08 | 成都航空职业技术学院 | Method for rendering cartoon characters |
CN116597288A (en) * | 2023-07-18 | 2023-08-15 | 江西格如灵科技股份有限公司 | Gaze point rendering method, gaze point rendering system, computer and readable storage medium |
CN116597288B (en) * | 2023-07-18 | 2023-09-12 | 江西格如灵科技股份有限公司 | Gaze point rendering method, gaze point rendering system, computer and readable storage medium |
CN117372656A (en) * | 2023-09-25 | 2024-01-09 | 广东工业大学 | User interface display method, device and medium for mixed reality |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110378914A (en) | Rendering method and device, system, display equipment based on blinkpunkt information | |
US10739849B2 (en) | Selective peripheral vision filtering in a foveated rendering system | |
US11836289B2 (en) | Use of eye tracking to adjust region-of-interest (ROI) for compressing images for transmission | |
US10720128B2 (en) | Real-time user adaptive foveated rendering | |
CN109086726B (en) | Local image identification method and system based on AR intelligent glasses | |
US10372205B2 (en) | Reducing rendering computation and power consumption by detecting saccades and blinks | |
CN106959759B (en) | Data processing method and device | |
US9842246B2 (en) | Fitting glasses frames to a user | |
CN106325510B (en) | Information processing method and electronic equipment | |
Nitschke et al. | Corneal imaging revisited: An overview of corneal reflection analysis and applications | |
CN109766011A (en) | A kind of image rendering method and device | |
US11663689B2 (en) | Foveated rendering using eye motion | |
CN104408764A (en) | Method, device and system for trying on glasses in virtual mode | |
CN104036169B (en) | Biological authentication method and biological authentication apparatus | |
US11238651B2 (en) | Fast hand meshing for dynamic occlusion | |
CN109246463A (en) | Method and apparatus for showing barrage | |
CN109002164A (en) | It wears the display methods for showing equipment, device and wears display equipment | |
CN109117779A (en) | One kind, which is worn, takes recommended method, device and electronic equipment | |
CN113467619B (en) | Picture display method and device, storage medium and electronic equipment | |
CN107203270A (en) | VR image processing methods and device | |
CN105763829A (en) | Image processing method and electronic device | |
CN107908278A (en) | A kind of method and apparatus of Virtual Reality interface generation | |
CN106851249A (en) | Image processing method and display device | |
CN112183160A (en) | Sight estimation method and device | |
CN111767817A (en) | Clothing matching method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20191025