WO2019206187A1 - Geometric quantity measurement method and device therefor, augmented reality device, and storage medium - Google Patents

Geometric quantity measurement method and device therefor, augmented reality device, and storage medium

Info

Publication number
WO2019206187A1
WO2019206187A1 · PCT/CN2019/084110 · CN2019084110W
Authority
WO
WIPO (PCT)
Prior art keywords
target object
convergence point
user
right eye
geometric
Prior art date
Application number
PCT/CN2019/084110
Other languages
English (en)
French (fr)
Inventor
李佃蒙
魏伟
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司
Priority to US16/498,822 (published as US11385710B2)
Publication of WO2019206187A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • At least one embodiment of the present disclosure is directed to a geometric quantity measuring method and apparatus therefor, an augmented reality device, and a storage medium.
  • Augmented Reality (AR) is a new technology that integrates real-world information with virtual information. It is characterized by applying virtual information to the real environment, so that physical objects in the real environment and virtual information are blended into the same picture or space to achieve a sensory experience that transcends reality.
  • Existing virtual reality systems mainly simulate a virtual three-dimensional world through a high-performance computing system with a central processing unit, and provide the user with visual, auditory and other sensory experiences, so that the user feels immersed in the scene; human-computer interaction is also possible.
  • At least one embodiment of the present disclosure provides a method for measuring a geometric quantity, comprising: acquiring left and right eye images when a user looks at a target object, and determining left and right eye lines of sight based on the acquired left and right eye images when the user looks at the target object; determining a convergence point of the left and right eye lines of sight according to the left and right eye lines of sight; and calculating a geometric parameter of the target object when the convergence point coincides with a desired position on the target object.
  • the geometric quantity measuring method provided by at least one embodiment of the present disclosure further includes: marking a position of the convergence point when a dwell time of the convergence point is greater than a preset threshold.
  • the geometric parameter of the target object is calculated when the position of the marked convergence point coincides with the desired position on the target object.
  • determining the left and right eye lines of sight based on the acquired left and right eye images when the user looks at the target object includes: determining left and right eye pupil positions and center positions of the left and right eyes based on the acquired left and right eye images when the user looks at the target object; and determining the left and right eye lines of sight according to the left and right eye pupil positions and the center positions of the left and right eyes.
  • calculating the geometric parameter of the target object includes: calculating the distance between the position of the convergence point and the user when the position of the convergence point coincides with a desired position on the target object; and determining the geometric parameters of the target object based on the calculated plurality of distances between the positions of the plurality of convergence points and the user, and the line-of-sight deflection angle when the user looks at different desired positions on the target object.
  • calculating the distance between the position of the convergence point and the user includes: calculating the distance between the position of the convergence point and the user according to the complementary angles of the angles between the left and right eye lines of sight and the straight line passing through the center positions of the left and right eyes, and the straight-line distance between the center positions of the left and right eyes.
  • the plurality of distances between the positions of the plurality of convergence points and the user include a first distance between the user and the position of the convergence point that coincides with a desired position on a first edge of the target object, and a second distance between the user and the position of the convergence point that coincides with a desired position on a second edge of the target object;
  • the first edge is an edge of the target object that is disposed opposite the second edge.
  • the line-of-sight deflection angle when the user looks at different desired positions on the target object is the angle between the first distance and the second distance.
  • the geometric parameters of the target object include: a height of the target object, a width of the target object, or a thickness of the target object, and the like.
  • upon receiving an instruction indicating that the position of the marked convergence point coincides with the desired position on the target object, the geometric parameters of the target object are calculated.
  • the geometric quantity measuring method is used for an augmented reality device.
  • At least one embodiment of the present disclosure also provides a geometric quantity measuring apparatus, including: a line of sight determining unit, a convergence point determining unit, and a geometric parameter calculating unit.
  • the line-of-sight determining unit is configured to acquire left and right eye images when the user looks at the target object, and determine left and right eye lines of sight based on the acquired left and right eye images when the user looks at the target object;
  • the convergence point determining unit is configured to determine a convergence point of the left and right eye lines of sight according to the left and right eye lines of sight;
  • the geometric parameter calculation unit is configured to calculate a geometric parameter of the target object when the convergence point coincides with a desired position on the target object.
  • the geometric quantity measuring apparatus further includes: a position marking unit of the convergence point.
  • the location marking unit of the convergence point is configured to mark a location of the convergence point when a residence time of the convergence point is greater than a preset threshold;
  • the geometric parameter calculation unit is configured to calculate the geometric parameters of the target object when the location of the marked convergence point coincides with the desired position on the target object.
  • the line-of-sight determining unit includes: a pupil-position and eye-center-position determining subunit and a line-of-sight acquiring subunit.
  • the pupil-position and eye-center-position determining subunit is configured to determine the left and right eye pupil positions and the center positions of the left and right eyes based on the acquired left and right eye images when the user looks at the target object;
  • the line-of-sight acquiring subunit is configured to determine the left and right eye lines of sight according to the left and right eye pupil positions and the center positions of the left and right eyes.
  • the geometric parameter calculating unit includes: a distance calculating subunit and a geometric parameter calculating subunit.
  • the distance calculation subunit is configured to calculate a distance between a location of the convergence point and the user when a location of the marked convergence point coincides with a desired location on the target object;
  • the geometric parameter calculation subunit is configured to determine the geometric parameters of the target object based on the calculated plurality of distances between the positions of the convergence points and the user, and the line-of-sight deflection angle when the user looks at different desired positions on the target object.
  • the distance calculation subunit is configured to calculate the distance between the location of the convergence point and the user according to the complementary angles of the angles between the left and right eye lines of sight and the straight line passing through the center positions of the left and right eyes, and the straight-line distance between the center positions of the left and right eyes.
  • At least one embodiment of the present disclosure also provides a geometric quantity measuring apparatus comprising: a processor; a machine readable storage medium storing one or more computer program modules; the one or more computer program modules being stored in the The machine readable storage medium is configured to be executed by the processor, the one or more computer program modules comprising instructions for performing a geometric quantity measurement method provided by any of the embodiments of the present disclosure.
  • At least one embodiment of the present disclosure also provides an augmented reality device, including the geometric quantity measuring device provided by any embodiment of the present disclosure.
  • At least one embodiment of the present disclosure also provides a storage medium that non-transitorily stores computer-readable instructions; when the non-transitorily stored computer-readable instructions are executed by a computer, the geometric quantity measurement method provided by any embodiment of the present disclosure can be performed.
  • FIG. 1 is a flow chart of a method for measuring a geometric quantity according to at least one embodiment of the present disclosure
  • FIG. 2A is a flowchart of another method for measuring a geometric quantity according to at least one embodiment of the present disclosure
  • Figure 2B is a flow chart of step S40 shown in Figure 1 or Figure 2A;
  • FIG. 3 is a schematic diagram showing a relationship between a left and right eye line of sight and a desired position on a target object according to at least one embodiment of the present disclosure
  • FIG. 4 is a schematic diagram showing another relationship between a left and right eye line of sight and a desired position on a target object according to at least one embodiment of the present disclosure
  • FIG. 5A is a schematic block diagram of a geometric quantity measuring apparatus according to at least one embodiment of the present disclosure.
  • Figure 5B is a schematic block diagram of the line of sight determining unit shown in Figure 5A;
  • Figure 5C is a schematic block diagram of the geometric parameter calculation unit shown in Figure 5A;
  • FIG. 5D is a schematic block diagram of another geometric quantity measuring apparatus according to at least one embodiment of the present disclosure.
  • FIG. 5E is a schematic block diagram of an augmented reality device according to at least one embodiment of the present disclosure.
  • FIG. 5F is a schematic diagram of an augmented reality device according to at least one embodiment of the present disclosure.
  • FIG. 6 is a hardware structural diagram of an augmented reality device according to at least one embodiment of the present disclosure.
  • FIG. 7 is a schematic block diagram of a storage medium according to at least one embodiment of the present disclosure.
  • an augmented reality (AR) device may have a human-eye tracking function, and there is still considerable room to expand the functions of the augmented reality device in this respect.
  • At least one embodiment of the present disclosure provides a method for measuring a geometric quantity, comprising: acquiring left and right eye images when a user looks at a target object, and determining left and right eye lines of sight based on the acquired left and right eye images when the user looks at the target object; determining a convergence point of the left and right eye lines of sight according to the left and right eye lines of sight; and calculating a geometric parameter of the target object when the convergence point coincides with a desired position on the target object.
  • At least one embodiment of the present disclosure also provides a geometric quantity measuring device, an augmented reality device, and a storage medium corresponding to the above geometric quantity measuring method.
  • the geometric quantity measuring method provided by the above embodiments of the present disclosure can determine the distance between a desired position on the target object and the user according to images of the user's left and right eyes, so as to obtain the geometric parameters of the target object, thereby expanding the functions of the AR device and improving the user experience of the AR device.
  • At least one embodiment of the present disclosure provides a method for measuring a geometric quantity, which can be used for an AR device or a VR (Virtual Reality, VR for short) device.
  • the embodiments of the present disclosure do not limit this, so the functions of the AR/VR device can be further extended and the user experience of the AR/VR device improved.
  • the following uses the geometric quantity measurement method for the AR device as an example for description.
  • the geometric quantity measurement method can be implemented at least partially in software and loaded and executed by a processor in the AR device, or at least partially implemented in hardware or firmware, so as to extend the functionality of the AR device and improve the user experience of the AR device.
  • FIG. 1 is a flow chart of a method for measuring a geometric quantity provided by at least one embodiment of the present disclosure.
  • the geometric quantity measuring method includes step S10, step S20, and step S40; in other examples, the geometric quantity measuring method further includes step S30.
  • the steps S10 to S40 of the geometric quantity measuring method and their respective exemplary implementations are respectively described below.
  • Step S10 Acquire the left and right eye images when the user looks at the target object, and determine the left and right eye sights based on the acquired left and right eye images when the user looks at the target object.
  • Step S20 Determine the convergence point of the left and right eye lines of sight according to the left and right eye line-of-sight directions.
  • Step S30 Mark the location of the convergence point when the dwell time of the convergence point is greater than the threshold.
  • Step S40 Calculate the distance between the location of the convergence point and the user when the convergence point coincides with the desired position on the target object.
  • after wearing the AR device, the user can observe light incident into the field of view of the user's eyes; this light is reflected by the eyes, and the light reflected by the two eyes (including the left eye and the right eye) can be received by a camera device or a dedicated optical sensor, from which the left and right eye images can be acquired.
  • the image pickup device may include a CMOS (Complementary Metal Oxide Semiconductor) sensor, a CCD (Charge Coupled Device) sensor, an infrared camera, or the like.
  • the camera device can be placed in the plane in which the OLED display is located, such as on the bezel of the AR device.
  • the left and right eye features can be acquired by performing image recognition and feature extraction on the left and right eye images.
  • the eye features may include the pupil center of the eye, the pupil size, corneal reflection information, the iris center, the iris size, and the like; further computation based on these eye features can determine the left and right eye lines of sight, including the left-eye line of sight and the right-eye line of sight.
  • the line of sight refers to the straight line between the eye and a position on the target object when the user observes the target object and the eye gazes at that position.
  • a large number of images (for example, 10,000 or more) including the left and right eyes may be collected in advance as a sample library, and feature extraction is performed on the images in the sample library. Then, using the images in the sample library and the extracted feature points, a classification model is trained and tested by machine learning (such as deep learning, or a regression algorithm based on local features) to obtain a classification model for the user's left and right eye images.
  • the classification model may also be implemented by other conventional algorithms in the art, such as a support vector machine (SVM), etc., which is not limited by the embodiments of the present disclosure.
  • the machine learning algorithm can be implemented by using a conventional method in the art, and details are not described herein again.
  • the input of the classification model is an acquired image, and the output is the left and right eye images of the user, so that image recognition can be realized.
  • the extraction of eye feature points may employ the Scale-Invariant Feature Transform (SIFT) feature extraction algorithm, the Histogram of Oriented Gradients (HOG) feature extraction algorithm, and other conventional algorithms in the art; a minimal sketch of one such feature-plus-classifier pipeline is given below.
  • the algorithm implementation is not limited by the embodiments of the present disclosure.
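  • The following is a minimal sketch, not the patent's own pipeline, of the kind of conventional feature-plus-classifier approach mentioned above: HOG features fed to a support vector machine. The fixed crop size, the integer labels, and the helper names are illustrative assumptions.

```python
# Illustrative sketch: classify eye-region crops with HOG features and an SVM.
# Assumes equally sized grayscale crops and integer labels prepared by the caller.
import numpy as np
from skimage.feature import hog   # HOG feature extraction
from sklearn.svm import SVC       # support vector machine classifier

def extract_hog_features(images):
    """images: iterable of equally sized grayscale arrays -> feature matrix."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

def train_eye_classifier(train_images, train_labels):
    """Fit a linear SVM on HOG features of the sample-library images."""
    model = SVC(kernel="linear")
    model.fit(extract_hog_features(train_images), train_labels)
    return model

# Usage with hypothetical data:
# model = train_eye_classifier(sample_images, sample_labels)
# prediction = model.predict(extract_hog_features([new_image]))
```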
  • for step S20, for example, when the two eyes are looking at a certain position on the target object, the left-eye line of sight and the right-eye line of sight converge at that position; the converged position is referred to as the convergence point in the various embodiments of the present disclosure.
  • after the user wears the AR device, the actual objects in the real environment within the field of view and the virtual information projected to the user's eyes can both be observed; the virtual information and the actual objects can be merged into the same picture or space to achieve a sensory experience beyond reality.
  • the geometric parameters of the target object include: the height of the target object, the width of the target object, or the thickness of the target object, and the like, which is not limited by the embodiment of the present disclosure.
  • the target object in at least one embodiment of the present disclosure refers to an actual object in the real environment observed by the user; if the user gazes at a certain position on the target object for a long time, the user may be interested in the target object and may want to know more details about it.
  • accordingly, in the geometric quantity measuring method provided by at least one embodiment of the present disclosure, the dwell time of the convergence point is calculated.
  • the dwell time of the convergence point is the time for which the user's two eyes gaze at a certain position on the target object; when the dwell time is greater than the preset threshold, the position of the convergence point is marked (a small sketch of such dwell-time gating is given below).
  • the preset threshold may be determined according to actual conditions, and embodiments of the present disclosure do not limit this.
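  • A small sketch of the dwell-time gating described above: the convergence point is only marked once the gaze has stayed near the same point for longer than a preset threshold. The 0.8 s threshold, the 10 mm radius, and the class name are illustrative assumptions; the patent leaves the threshold to the implementer.

```python
# Minimal dwell-time gate for marking a convergence point (illustrative values).
import math

class DwellMarker:
    def __init__(self, threshold_s=0.8, radius_mm=10.0):
        self.threshold_s = threshold_s   # preset dwell-time threshold
        self.radius_mm = radius_mm       # how far the gaze may drift and still "dwell"
        self._anchor = None              # convergence point where the current dwell started
        self._start_t = None

    def update(self, point, t):
        """Feed the latest convergence point and timestamp; return it once it dwells."""
        if self._anchor is None or math.dist(point, self._anchor) > self.radius_mm:
            self._anchor, self._start_t = point, t   # gaze moved: restart the timer
            return None
        if t - self._start_t > self.threshold_s:
            return self._anchor                      # dwell exceeded: mark this position
        return None
```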
  • for example, there may be multiple ways to mark the position of the convergence point; for example, a marker graphic (a circular dot, a square, or a cross) may be set and projected in front of the user through the AR device, and by observing the marker graphic with the eyes, the user can see the position of the marked convergence point.
  • after the position of the convergence point is marked, the user's eyes can observe it through the AR device; the position of the convergence point may not be at the same position as the desired position on the target object, that is, they may not coincide.
  • in that case the user can adjust the left and right eye lines of sight by turning the head, changing position, or rotating the eyeballs; when the marked convergence point coincides with the desired position on the target object, the geometric parameters of the target object are calculated.
  • for example, in some examples, when the convergence point coincides with the desired position on the target object, the user can issue an instruction by operating a button or virtual menu provided on the AR device, i.e., generate an instruction indicating that the position of the marked convergence point coincides with the desired position on the target object; when the AR device receives this instruction, or when the AR device directly detects that the convergence point coincides with the desired position on the target object, the distance between the position of the convergence point and the user can be calculated, and so on, so as to calculate the geometric parameters of the target object.
  • the specific calculation method of the geometric parameters of the target object will be described in detail below, and details are not described herein again.
  • the desired position on the target object is a position on the target object that the user desires to see, and may be any position on the target object.
  • for example, when determining the height of the target object, the desired positions may be the upper edge position and the lower edge position of the target object; when determining the width of the target object, the desired positions may be the left edge position and the right edge position of the target object.
  • the geometric quantity measuring method determines the user's left and right eye lines of sight according to the left and right eye features, and thereby determines the convergence point of the left and right eye lines of sight; when the position of the marked convergence point coincides with the desired position on the target object, the distance between the position of the convergence point and the user can be calculated, which is the distance between the desired position on the target object and the user. This distance allows the user to know how far away a certain position on the target object is, and the geometric parameters of the target object are then calculated from this distance. Therefore, the above method can extend the functions of the AR device and improve the user experience of the AR device.
  • a button or virtual menu that can generate a measurement instruction can be provided on the AR device; a measurement instruction is generated when the user operates the button or menu, and the AR device then enters the measurement function mode. After that, the AR device starts to execute the measurement method: acquiring the left and right eye features, calculating the dwell time of the convergence point, calculating the distance between the convergence point and the user, and so on. When no measurement instruction is received, the AR device can carry out its existing functions, which avoids the AR device executing the above method in real time and causing unnecessary consumption.
  • determining the user's left and right eye line-of-sight directions according to the acquired left and right eye features when the user gazes at the target object includes steps S11 to S12.
  • the steps S11 to S12 of the geometric quantity measuring method and their respective exemplary implementations are respectively described below.
  • Step S11 Determine the left and right eye pupil positions and the center positions of the left and right eyes based on the acquired left and right eye images when the user looks at the target object.
  • Step S12 Determine the left and right eyesight lines according to the left and right eye pupil positions and the center positions of the left and right eyes.
  • the left and right eye pupil positions and the center positions of the left and right eyes are recognized according to the acquired left and right eye images, thereby determining the left and right eye gaze directions.
  • specifically, the left eye pupil position and the left eye center position are determined from the left eye image by an image recognition algorithm, and the right eye pupil position and the right eye center position are determined from the right eye image.
  • the center positions of the left and right eye images can be separately extracted by the center-of-gravity (centroid) method.
  • the Canny edge detection algorithm, the Hough transform fitting method, the double-ellipse fitting algorithm, and the like can also be used to determine the pupil contour and obtain the feature points of the pupil image, and the fitted pupil contour can be verified to determine the center position of the pupil.
  • the center positions of the left and right eyes may coincide with the center positions of the left and right eye pupils, or may not coincide with the center positions of the left and right eye pupils, and embodiments of the present disclosure do not limit this.
  • the left eye pupil position and the right eye pupil position may be the position a1 of the center of the left eye pupil and the position a2 of the center of the right eye pupil, while the center position of the left eye may refer to the center of the entire left eye region; for example, FIG. 3 schematically shows that both the left eye region and the right eye region are elliptical regions, the center position b1 of the left eye may refer to the center of the elliptical left eye region, and similarly the center position b2 of the right eye may refer to the center of the entire elliptical right eye region.
  • the left-eye line of sight may be determined from the left eye pupil position and the left eye center position; as shown in FIG. 3, the left-eye line of sight M1 is the line segment that passes through the center position b1 of the left eye and the left eye pupil a1 and extends to a certain position on the target object A. That position on the target object A may or may not be a desired position, which is not limited by the embodiments of the present disclosure.
  • the right-eye line of sight M2 is the line segment that passes through the center position b2 of the right eye and the right eye pupil a2 and extends to a certain position on the target object A; the position where the left-eye line of sight and the right-eye line of sight converge (i.e., the position O of the convergence point) is a position on the target object.
  • the above left and right eye pupil positions and eye center positions may not be exact and may contain a certain error; the determined left and right eye lines of sight and the calculated distance between the position of the convergence point and the user may therefore also contain errors, but these are acceptable.
  • This method only needs to calculate the approximate distance between the position of the convergence point (that is, a certain position on the target object) and the user; a small sketch of the convergence-point geometry follows.
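  • The following is a minimal 2-D sketch, not part of the patent text, of the geometry described above: each line of sight is modelled as a ray from the eye's center position b through its pupil position a, and the convergence point is the intersection of the two rays. The common coordinate system (a top-down view in millimetres) and the example values are assumptions for illustration only.

```python
# Intersect the two lines of sight (rays b->a) to locate the convergence point.
import numpy as np

def convergence_point(b_left, a_left, b_right, a_right):
    """Return the (x, y) intersection of the rays b_left->a_left and b_right->a_right."""
    d1 = np.asarray(a_left, float) - np.asarray(b_left, float)    # left gaze direction
    d2 = np.asarray(a_right, float) - np.asarray(b_right, float)  # right gaze direction
    # Solve b_left + t*d1 = b_right + s*d2 for (t, s); fails if the rays are parallel.
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]])
    rhs = np.asarray(b_right, float) - np.asarray(b_left, float)
    t, _s = np.linalg.solve(A, rhs)
    return np.asarray(b_left, float) + t * d1

# Purely illustrative values: eye centers 63 mm apart, pupils turned slightly
# inwards, so the rays meet roughly one metre in front of the user.
# convergence_point((-31.5, 0), (-31.0, 15.9), (31.5, 0), (31.0, 15.9))  # ~ (0, 1000)
```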
  • calculating the geometric parameters of the target object described in the above step S40 includes steps S41 to S42.
  • the steps S41 to S42 of the geometric quantity measuring method and their respective exemplary implementations are respectively described below.
  • Step S41 When the positions of the plurality of convergence points coincide with the plurality of desired positions on the target object, the distances between the positions of the plurality of convergence points and the user are respectively calculated.
  • Step S42 Determine geometric parameters of the target object according to the calculated multiple distances between the locations of the plurality of convergence points and the user and the line of sight deflection angle when the user looks at different desired positions on the target object.
  • for step S41, for example, the distance between the position of the convergence point and the user is calculated from the complementary angles of the angles between the left and right eye lines of sight and the straight line passing through the center positions of the left and right eyes, together with the straight-line distance between the center positions of the left and right eyes.
  • for example, as shown in FIG. 3, the straight line passing through the center positions of the left and right eyes is N; the complementary angle of the angle between the left-eye line of sight M1 and the straight line N is θ1, which may also be referred to as the left-eye pupil deflection angle; the complementary angle of the angle between the right-eye line of sight M2 and the straight line N is θ2, which may also be referred to as the right-eye pupil deflection angle; and the straight-line distance between the center positions of the left and right eyes is PD, i.e., the distance between the left eye pupil and the right eye pupil is PD. The distance PD may be a preset value, or a value calculated in real time from the left eye pupil position and the right eye pupil position.
  • from this geometry, L·tanθ1 + L·tanθ2 = PD, so the distance L between the position O of the convergence point and the user is L = PD / (tanθ1 + tanθ2); a sketch of this computation follows.
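  • A short sketch of the computation above; the function name and the example values (an inter-pupil distance of 63 mm and symmetric 0.9° deflections) are assumptions for illustration.

```python
# Distance from the user to the convergence point: L = PD / (tan(theta1) + tan(theta2)).
import math

def convergence_distance(pd, theta1_deg, theta2_deg):
    """pd: inter-pupil distance; theta1/theta2: pupil deflection angles in degrees."""
    return pd / (math.tan(math.radians(theta1_deg)) + math.tan(math.radians(theta2_deg)))

# Example: PD = 63 mm and theta1 = theta2 = 0.9 degrees put the convergence
# point roughly two metres in front of the user.
# convergence_distance(63, 0.9, 0.9)  # ~2005 mm
```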
  • the geometric parameters of the target object may be determined based on the calculated plurality of distances between the positions of the convergence points and the user, and the line-of-sight deflection angle when the user looks at different desired positions on the target object.
  • the geometric parameters of the target object may be further determined based on the plurality of calculated distances and the line-of-sight deflection angles when the user looks at different desired positions on the target object.
  • the plurality of distances between the positions of the plurality of convergence points and the user include a first distance between the user and the position of the convergence point that coincides with a desired position on a first edge of the target object, and a second distance between the user and the position of the convergence point that coincides with a desired position on a second edge of the target object.
  • the first edge and the second edge are opposite edges, for example, an upper edge and a lower edge, or a left edge and a right edge, and the like, which is not limited by the embodiment of the present disclosure.
  • for example, the convergence point of the left and right eye lines of sight is determined to be a first convergence point according to the left and right eye lines of sight; when the dwell time of the first convergence point is greater than a preset threshold, the position of the first convergence point is marked as point O1; and when an instruction is received indicating that the marked position O1 of the first convergence point coincides with a first desired position on the target object A (for example, a desired position on the first edge), the first distance between the position O1 of the first convergence point and the user is calculated as L1.
  • similarly, the convergence point of the left and right eye lines of sight is then determined to be a second convergence point according to the left and right eye lines of sight; when the dwell time of the second convergence point is greater than the preset threshold, the position of the second convergence point is marked as point O2; and when an instruction is received indicating that the marked position O2 of the second convergence point coincides with a second desired position on the target object A (for example, a desired position on the second edge), the second distance between the position O2 of the second convergence point and the user is calculated as L2.
  • the head deflection angle when the user looks at different desired positions on the target object (e.g., when the user's gaze moves from the first desired position to the second desired position) can be obtained from a gyroscope on the AR device.
  • the head deflection angle can be used as the line-of-sight deflection angle.
  • for example, the line-of-sight deflection angle is the angle between the user's line of sight toward the position O1 of the first convergence point (which coincides with the first desired position) and the user's line of sight toward the position O2 of the second convergence point (which coincides with the second desired position), i.e., the angle between the first distance and the second distance.
  • the first distance L1, the second distance L2, the line-of-sight deflection angle α, and the distance between the position O1 of the first convergence point (i.e., the first desired position) and the position O2 of the second convergence point (i.e., the second desired position) satisfy the following relationship (given as a formula image in the original), where H is the distance between the two desired positions on the target object A.
  • the distance is one of geometric parameters of the target object, such as the width, height or thickness of the target object A.
  • for example, taking the height of the target object A as an example, the position O1 of the first convergence point coincides with a desired position (e.g., the first desired position) on the upper edge (e.g., the first edge) of the target object A, and the first distance L1 is the distance between the user and the position of the first convergence point O1 that coincides with the desired position on the upper edge; as described above, the position O2 of the second convergence point is another position (e.g., the second desired position) on the lower edge (e.g., the second edge) of the target object A, the position O2 of the second convergence point coincides with the second desired position on the lower edge, and the second distance L2 is the distance between the user and the position of the second convergence point O2 that coincides with the desired position on the lower edge. According to the above formula, the distance H between the first desired position on the upper edge and the second desired position on the lower edge, which is the height of the target object, can be calculated.
  • other geometric parameters of the target object, such as the width of the target object (e.g., using desired positions on the left and right edges), the thickness of the target object, or the distance between other portions of the target object, may be determined by reference to the above method; a hedged sketch of one possible form of the relationship follows.
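  • The formula relating L1, L2, α and H appears only as an image in the original; the following sketch is a hedged reconstruction, not a quotation of the patent: treating the user's position and the two marked convergence points O1 and O2 as a triangle with sides L1 and L2 enclosing the deflection angle α, the law of cosines gives the distance H between the two desired positions.

```python
# Hedged reconstruction: H^2 = L1^2 + L2^2 - 2*L1*L2*cos(alpha)  (law of cosines).
import math

def target_dimension(l1, l2, alpha_deg):
    """Distance H between the two desired positions (e.g. the object's height)."""
    alpha = math.radians(alpha_deg)
    return math.sqrt(l1 * l1 + l2 * l2 - 2.0 * l1 * l2 * math.cos(alpha))

# Example with assumed values: gazing at the top and bottom edges of an object
# about one metre away, with L1 = 1.05 m, L2 = 1.00 m and a 20-degree deflection:
# target_dimension(1.05, 1.00, 20.0)  # ~0.36 m
```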
  • the foregoing is only one method for calculating the geometric parameters of the target object; the geometric parameters of the target object may also be determined by other calculation methods based on the plurality of distances and the plurality of line-of-sight deflection angles, and are not limited to the method described in this embodiment.
  • the flow of the geometric quantity measurement method of the embodiment of the present disclosure may include more or less operations, which may be performed sequentially or in parallel.
  • although the flow of the measurement method described above includes a plurality of operations occurring in a specific order, it should be clearly understood that the order of the plurality of operations is not limited.
  • the geometric quantity measuring method described above may be performed once or multiple times according to predetermined conditions.
  • the geometric quantity measurement method provided by at least one embodiment of the present disclosure can calculate the geometric parameters of the target object according to the left and right eye images, thereby expanding the functions of the AR device and increasing the user experience of the AR device.
  • FIG. 5A is a schematic block diagram of a geometric quantity measuring apparatus according to at least one embodiment of the present disclosure.
  • the geometric quantity measuring device 05 includes a line of sight determining unit 501, a convergence point determining unit 502, and a geometric parameter calculating unit 504.
  • the gaze determining unit 501 is configured to acquire left and right eye images when the user looks at the target object, and determine left and right eye gaze based on the acquired left and right eye images when the user gaze at the target object.
  • the line of sight determining unit 501 can implement the step S10, and the specific implementation method can refer to the related description of step S10, and details are not described herein again.
  • the convergence point determining unit 502 is configured to determine a convergence point of the left and right eyesight lines according to the left and right eyesight lines.
  • the convergence point determining unit 502 can implement the step S20, and the specific implementation method can refer to the related description of step S20, and details are not described herein again.
  • the geometric parameter calculation unit 504 is configured to calculate a distance between the location of the convergence point and the user upon receiving an instruction to indicate that the location of the convergence point of the marker coincides with the desired location on the target object.
  • the geometric parameter calculation unit 504 can implement the step S40, and the specific implementation method can refer to the related description of step S40, and details are not described herein again.
  • the geometric quantity measuring device 05 further includes a position marking unit 503 of the convergence point.
  • the location marking unit 503 of the convergence point is configured to mark the location of the convergence point when the residence time of the convergence point is greater than a preset threshold.
  • the geometric parameter calculation unit 504 is configured to calculate the geometric parameters of the target object when the position of the marked convergence point coincides with the desired position on the target object.
  • the location marking unit 503 of the convergence point may implement step S30, and the specific implementation method may refer to the related description of step S30, and details are not described herein again.
  • the visual line determining unit 501 includes a pupil position and a center position determining subunit 5011 and a line of sight acquiring subunit 5012 of the left and right eyes.
  • the pupil position and the center position determining sub-unit 5011 of the left and right eyes are configured to determine the left and right eye pupil positions and the center positions of the left and right eyes based on the acquired left and right eye images when the user looks at the target object.
  • the line of sight acquisition subunit 5012 is configured to determine the left and right eyesight lines based on the left and right eye pupil positions and the center positions of the left and right eyes.
  • geometric parameter calculation unit 504 includes distance calculation sub-unit 5041 and geometric parameter determination sub-unit 5042.
  • the distance calculation sub-unit 5041 is configured to calculate the distance between the location of the convergence point and the user when the position of the marked convergence point coincides with the desired position on the target object.
  • the distance calculation sub-unit 5041 is configured to calculate the distance between the position of the convergence point and the user according to the complementary angles of the angles between the left and right eye lines of sight and the straight line passing through the center positions of the left and right eyes, and the straight-line distance between the center positions of the left and right eyes.
  • the geometric parameter determining subunit 5042 is configured to determine a geometric parameter of the target object according to the calculated plurality of distances between the positions of the plurality of convergence points and the user and the line of sight deflection angle when the user looks at different desired positions on the target object.
  • the geometric quantity measuring device provided by the present disclosure can expand the functions of the AR device and improve the user experience of the AR device.
  • the device embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, ie may be located in one place, or may be distributed over multiple network elements; The above units may be combined into one unit, or may be further split into a plurality of subunits.
  • the device in this embodiment may be implemented by means of software, or by software plus necessary general hardware, and may also be implemented by hardware.
  • the technical solution of the present disclosure may be embodied essentially in the form of a software product; taking a software implementation as an example, the device in a logical sense is formed by the processor of the augmented reality (AR) device in which the device is located reading the corresponding computer program instructions from a non-volatile memory into memory and running them.
  • the geometric quantity measuring apparatus may include more or less circuits, and the connection relationship between the respective circuits is not limited, and may be determined according to actual needs.
  • the specific configuration of each circuit is not limited, and may be composed of an analog device according to the circuit principle, a digital chip, or other suitable manner.
  • FIG. 5D is a schematic block diagram of another geometric quantity measuring apparatus according to at least one embodiment of the present disclosure.
  • the geometric quantity measuring device 200 includes a processor 210, a machine readable storage medium 220, and one or more computer program modules 221.
  • processor 210 is coupled to machine readable storage medium 220 via bus system 230.
  • one or more computer program modules 221 are stored in machine readable storage medium 220.
  • one or more computer program modules 221 include instructions for performing the geometric quantity measurement methods provided by any of the embodiments of the present disclosure.
  • instructions in one or more computer program modules 221 can be executed by processor 210.
  • the bus system 230 can be a conventional serial, parallel communication bus, etc., and embodiments of the present disclosure do not limit this.
  • the processor 210 can be a central processing unit (CPU), a graphics processing unit (GPU), or another form of processing unit having data processing capabilities and/or instruction execution capabilities; it can be a general-purpose processor or a dedicated processor, and can control other components in the geometric quantity measuring device 200 to perform desired functions.
  • Machine-readable storage medium 220 can include one or more computer program products, which can include various forms of computer-readable storage media, such as volatile memory and/or nonvolatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache or the like.
  • the nonvolatile memory may include, for example, a read only memory (ROM), a hard disk, a flash memory, or the like.
  • One or more computer program instructions can be stored on the computer-readable storage medium, and the processor 210 can execute the program instructions to implement the functions in the embodiments of the present disclosure (implemented by the processor 210) and/or other desired functions, for example, the geometric quantity measurement method and the like.
  • Various applications and various data, such as the convergence points and various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
  • At least one embodiment of the present disclosure also provides an augmented reality device.
  • FIGS. 5E to 6 are schematic diagrams of an augmented reality device according to at least one embodiment of the present disclosure.
  • the augmented reality device 1 includes the geometric quantity measuring device 100/200 provided by any embodiment of the present disclosure; for the geometric quantity measuring device 100/200, reference may be made to the related descriptions of FIG. 5A to FIG. 5D, which are not repeated here.
  • the augmented reality device 1 can be worn over a person's eyes, and the target object (not shown) can be located in front of the person, so that the geometric measurement function can be realized as needed.
  • the AR device 1 includes a processor 101 and a machine readable storage medium 102, and may further include a non-volatile medium 103, a communication interface 104, and a bus 105.
  • the machine readable storage medium 102, the processor 101, the nonvolatile medium 103, and the communication interface 104 complete communication with each other via the bus 105.
  • the processor 101 reads and executes machine executable instructions in the machine readable storage medium 102 corresponding to the control logic of the geometric quantity measurement method.
  • the communication interface 104 is coupled to a communication device (not shown).
  • the communication device can communicate with the network and other devices via wireless communication, such as the Internet, an intranet, and/or a wireless network such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN).
  • Wireless communication can use any of a variety of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wi-Fi (e.g., based on the IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n standards), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail, instant messaging, and/or Short Message Service (SMS), or any other suitable communication protocol.
  • the machine-readable storage medium referred to in the embodiments of the present disclosure may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information such as executable instructions, data, and the like.
  • the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), any type of storage disk (such as an optical disc, a DVD, etc.), a similar storage medium, or a combination thereof.
  • the non-volatile medium 103 can be a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), any type of storage disk (such as a compact disc, a DVD, etc.), a similar non-volatile storage medium, or a combination thereof.
  • the embodiment of the present disclosure does not give all the constituent elements of the AR device 1.
  • those skilled in the art can provide and set other component units not shown according to specific needs, which is not limited by the embodiments of the present disclosure.
  • the augmented reality device provided by the embodiments of the present disclosure can determine the distance between a desired position on the target object and the user; this distance allows the user to know how far away a certain position on the target object is, thereby expanding the functions of the AR device and improving the user experience of the AR device.
  • An embodiment of the present disclosure also provides a storage medium.
  • the storage medium 400 non-transitorily stores computer-readable instructions 401; when the non-transitorily stored computer-readable instructions 401 are executed by a computer (including a processor), the geometric quantity measurement method provided by any embodiment of the present disclosure can be performed.
  • the storage medium may be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium contains computer-readable program code for obtaining the left and right eye lines of sight, and another computer-readable storage medium contains computer-readable program code for determining the convergence point.
  • the computer can execute the program code stored in the computer storage medium to perform a geometric quantity measurement method such as provided by any of the embodiments of the present disclosure.
  • the storage medium may include a memory card of a smart phone, a storage unit of a tablet computer, a hard disk of a personal computer, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a flash memory, any combination of the above storage media, or other suitable storage media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A geometric quantity measurement method and device therefor, an augmented reality device, and a storage medium. The geometric quantity measurement method includes: acquiring left and right eye images when a user gazes at a target object, and determining left and right eye lines of sight based on the acquired left and right eye images when the user gazes at the target object; determining a convergence point of the left and right eye lines of sight according to the left and right eye lines of sight; and calculating geometric parameters of the target object when the convergence point coincides with a desired position on the target object. The method can expand the functions of an AR device and improve the user experience of the AR device.

Description

Geometric quantity measurement method and device therefor, augmented reality device, and storage medium
This application claims priority to Chinese Patent Application No. 201810401274.X, filed on April 28, 2018, the disclosure of which is incorporated herein by reference in its entirety as part of this application.
Technical Field
At least one embodiment of the present disclosure relates to a geometric quantity measurement method and device therefor, an augmented reality device, and a storage medium.
Background
Augmented Reality (AR) is a new technology that fuses real-world information with virtual information. It is characterized by applying virtual information to the real environment, so that physical objects in the real environment and virtual information can be blended into the same picture or space, achieving a sensory experience that transcends reality.
Existing virtual reality systems mainly simulate a virtual three-dimensional world through a high-performance computing system with a central processing unit, and provide the user with visual, auditory and other sensory experiences, so that the user feels immersed in the scene; human-computer interaction is also possible.
Summary
At least one embodiment of the present disclosure provides a geometric quantity measurement method, comprising: acquiring left and right eye images when a user gazes at a target object, and determining left and right eye lines of sight based on the acquired left and right eye images when the user gazes at the target object; determining a convergence point of the left and right eye lines of sight according to the left and right eye lines of sight; and calculating geometric parameters of the target object when the convergence point coincides with a desired position on the target object.
For example, the geometric quantity measurement method provided by at least one embodiment of the present disclosure further comprises: marking the position of the convergence point when the dwell time of the convergence point is greater than a preset threshold.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, the geometric parameters of the target object are calculated when the position of the marked convergence point coincides with the desired position on the target object.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, determining the left and right eye lines of sight based on the acquired left and right eye images when the user gazes at the target object comprises: determining left and right eye pupil positions and center positions of the left and right eyes based on the acquired left and right eye images when the user gazes at the target object; and determining the left and right eye lines of sight according to the left and right eye pupil positions and the center positions of the left and right eyes.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, calculating the geometric parameters of the target object comprises: calculating the distance between the position of the convergence point and the user when the position of the convergence point coincides with a desired position on the target object; and determining the geometric parameters of the target object according to the calculated distances between the positions of a plurality of convergence points and the user and the line-of-sight deflection angle when the user gazes at different desired positions on the target object.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, calculating the distance between the position of the convergence point and the user comprises: calculating the distance between the position of the convergence point and the user according to the complementary angles of the angles between the left and right eye lines of sight and the straight line passing through the center positions of the left and right eyes, and the straight-line distance between the center positions of the left and right eyes.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, the plurality of distances between the positions of the plurality of convergence points and the user include a first distance between the user and the position of the convergence point that coincides with a desired position on a first edge of the target object, and a second distance between the user and the position of the convergence point that coincides with a desired position on a second edge of the target object; the first edge is an edge of the target object that is disposed opposite the second edge.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, the line-of-sight deflection angle when the user gazes at different desired positions on the target object is the angle between the first distance and the second distance.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, the geometric parameters of the target object include: the height of the target object, the width of the target object, the thickness of the target object, or the like.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, the geometric parameters of the target object are calculated upon receiving an instruction indicating that the position of the marked convergence point coincides with the desired position on the target object.
For example, in the geometric quantity measurement method provided by at least one embodiment of the present disclosure, the geometric quantity measurement method is used for an augmented reality device.
At least one embodiment of the present disclosure further provides a geometric quantity measuring device, comprising: a line-of-sight determining unit, a convergence point determining unit, and a geometric parameter calculation unit. The line-of-sight determining unit is configured to acquire left and right eye images when a user gazes at a target object, and determine left and right eye lines of sight based on the acquired left and right eye images when the user gazes at the target object; the convergence point determining unit is configured to determine a convergence point of the left and right eye lines of sight according to the left and right eye lines of sight; and the geometric parameter calculation unit is configured to calculate geometric parameters of the target object when the convergence point coincides with a desired position on the target object.
For example, the geometric quantity measuring device provided by at least one embodiment of the present disclosure further comprises a convergence point position marking unit. The convergence point position marking unit is configured to mark the position of the convergence point when the dwell time of the convergence point is greater than a preset threshold; the geometric parameter calculation unit is configured to calculate the geometric parameters of the target object when the position of the marked convergence point coincides with the desired position on the target object.
For example, in the geometric quantity measuring device provided by at least one embodiment of the present disclosure, the line-of-sight determining unit comprises: a pupil-position and eye-center-position determining subunit and a line-of-sight acquiring subunit. The pupil-position and eye-center-position determining subunit is configured to determine left and right eye pupil positions and center positions of the left and right eyes based on the acquired left and right eye images when the user gazes at the target object; the line-of-sight acquiring subunit is configured to determine the left and right eye lines of sight according to the left and right eye pupil positions and the center positions of the left and right eyes.
For example, in the geometric quantity measuring device provided by at least one embodiment of the present disclosure, the geometric parameter calculation unit comprises: a distance calculation subunit and a geometric parameter calculation subunit. The distance calculation subunit is configured to calculate the distance between the position of the convergence point and the user when the position of the marked convergence point coincides with the desired position on the target object; the geometric parameter calculation subunit is configured to determine the geometric parameters of the target object according to the calculated distances between the positions of a plurality of convergence points and the user and the line-of-sight deflection angle when the user gazes at different desired positions on the target object.
For example, in the geometric quantity measuring device provided by at least one embodiment of the present disclosure, the distance calculation subunit is configured to calculate the distance between the position of the convergence point and the user according to the complementary angles of the angles between the left and right eye lines of sight and the straight line passing through the center positions of the left and right eyes, and the straight-line distance between the center positions of the left and right eyes.
At least one embodiment of the present disclosure further provides a geometric quantity measuring device, comprising: a processor; and a machine-readable storage medium storing one or more computer program modules. The one or more computer program modules are stored in the machine-readable storage medium and configured to be executed by the processor, and the one or more computer program modules include instructions for performing the geometric quantity measurement method provided by any embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides an augmented reality device, comprising the geometric quantity measuring device provided by any embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides a storage medium that non-transitorily stores computer-readable instructions; when the non-transitorily stored computer-readable instructions are executed by a computer, the geometric quantity measurement method provided by any embodiment of the present disclosure can be performed.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present disclosure and do not limit the present disclosure.
FIG. 1 is a flowchart of a geometric quantity measurement method provided by at least one embodiment of the present disclosure;
FIG. 2A is a flowchart of another geometric quantity measurement method provided by at least one embodiment of the present disclosure;
FIG. 2B is a flowchart of step S40 shown in FIG. 1 or FIG. 2A;
FIG. 3 is a schematic diagram of a relationship between left and right eye lines of sight and a desired position on a target object provided by at least one embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another relationship between left and right eye lines of sight and desired positions on a target object provided by at least one embodiment of the present disclosure;
FIG. 5A is a schematic block diagram of a geometric quantity measuring device provided by at least one embodiment of the present disclosure;
FIG. 5B is a schematic block diagram of the line-of-sight determining unit shown in FIG. 5A;
FIG. 5C is a schematic block diagram of the geometric parameter calculation unit shown in FIG. 5A;
FIG. 5D is a schematic block diagram of another geometric quantity measuring device provided by at least one embodiment of the present disclosure;
FIG. 5E is a schematic block diagram of an augmented reality device provided by at least one embodiment of the present disclosure;
FIG. 5F is a schematic diagram of an augmented reality device provided by at least one embodiment of the present disclosure;
FIG. 6 is a hardware structure diagram of an augmented reality device provided by at least one embodiment of the present disclosure; and
FIG. 7 is a schematic block diagram of a storage medium provided by at least one embodiment of the present disclosure.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are described clearly and completely below with reference to the drawings of the embodiments of the present disclosure. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. Based on the described embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative labor fall within the protection scope of the present disclosure.
Unless otherwise defined, the technical or scientific terms used in the present disclosure shall have the ordinary meanings understood by a person of ordinary skill in the field to which the present disclosure belongs. The words "first", "second", and the like used in the present disclosure do not denote any order, quantity, or importance, but are only used to distinguish different components. Likewise, words such as "a", "an", or "the" do not denote a limitation of quantity, but rather denote the presence of at least one. Words such as "include" or "comprise" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connected" or "coupled" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", and the like are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may also change accordingly.
Exemplary embodiments are described in detail here, and examples thereof are shown in the drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
For example, an augmented reality (AR) device may have a human-eye tracking function, and there is still considerable room to expand the functions of the augmented reality device in this respect.
At least one embodiment of the present disclosure provides a geometric quantity measurement method, comprising: acquiring left and right eye images when a user gazes at a target object, and determining left and right eye lines of sight based on the acquired left and right eye images when the user gazes at the target object; determining a convergence point of the left and right eye lines of sight according to the left and right eye lines of sight; and calculating geometric parameters of the target object when the convergence point coincides with a desired position on the target object.
At least one embodiment of the present disclosure further provides a geometric quantity measuring device, an augmented reality device, and a storage medium corresponding to the above geometric quantity measurement method.
The geometric quantity measurement method provided by the above embodiments of the present disclosure can determine the distance between a desired position on the target object and the user according to images of the user's left and right eyes, so as to obtain the geometric parameters of the target object, thereby expanding the functions of the AR device and improving the user experience of the AR device.
The embodiments of the present disclosure are described in detail below with reference to the drawings.
At least one embodiment of the present disclosure provides a geometric quantity measurement method, which can be used for an AR device, a VR (Virtual Reality) device, or the like; the embodiments of the present disclosure do not limit this, so the functions of the AR/VR device can be further extended and the user experience of the AR/VR device improved. The following description takes the use of the geometric quantity measurement method for an AR device as an example.
For example, the geometric quantity measurement method can be implemented at least partially in software and loaded and executed by a processor in the AR device, or implemented at least partially in hardware or firmware, so as to expand the functions of the AR device and improve the user experience of the AR device.
FIG. 1 is a flowchart of a geometric quantity measurement method provided by at least one embodiment of the present disclosure. As shown in FIG. 1, for example, in some examples the geometric quantity measurement method includes step S10, step S20, and step S40; in other examples, the geometric quantity measurement method further includes step S30. Steps S10 to S40 of the geometric quantity measurement method and their respective exemplary implementations are introduced below.
Step S10: acquire left and right eye images when the user gazes at the target object, and determine left and right eye lines of sight based on the acquired left and right eye images when the user gazes at the target object.
Step S20: determine the convergence point of the left and right eye lines of sight according to the left and right eye line-of-sight directions.
Step S30: mark the position of the convergence point when the dwell time of the convergence point is greater than a threshold.
Step S40: calculate the distance between the position of the convergence point and the user when the convergence point coincides with the desired position on the target object.
对于步骤S10,例如,用户在佩戴AR设备后可观察到入射至用户两眼视野范围内的光线,该光线可被眼睛反射,被两眼(包括左眼和右眼)反射的光线可通过摄像装置或者一些专门的光学传感器接收,据此可获取左右眼部图像。
例如,该摄像装置可以包括CMOS(互补金属氧化物半导体)传感器、CCD(电荷耦合器件)传感器、红外摄像头等。例如,摄像装置可以设置在OLED显示屏所在的平面内,例如设置在AR设备的边框上。
例如,通过对左右眼部图像进行图像识别和特征提取等方法可以获取左 右眼部特征,包括左眼眼部特征和右眼眼部特征。例如,眼部特征可以包括眼睛的瞳孔中心、瞳孔大小、角膜反射信息、虹膜中心、虹膜尺寸等特征,进一步的根据眼部特征进行运算处理可以确定左右眼视线,包括左眼视线和右眼视线。例如,视线指用户观察目标物体时眼睛注视目标物体上某个位置时,眼睛与目标物体上该位置的直线。
例如,可以预先搜集大量的(例如,10000张或更多张)包括左右眼部的图像作为样本库,并对样本库中的图像进行特征提取。然后,使用样本库中的图像和提取的特征点通过机器学习(例如深度学习,或者基于局部特征的回归算法)等算法对分类模型进行训练和测试,以得到获取用户的左右眼部图像的分类模型。例如,该分类模型也可以通过本领域内的其他常规算法例如支持向量机(Support Vector Machine,SVM)等实现,本公开的实施例对此不作限制。需要注意的是,该机器学习算法可以采用本领域内的常规方法实现,在此不再赘述。例如,该分类模型的输入为采集的图像,输出为用户的左右眼部图像,从而可以实现图像识别。
例如,眼部特征点的提取可以采用尺度不变特征变换(Scale-invariant Feature Transform,SIFT)特征提取算法、方向梯度直方图(Histogram of Oriented Gradient,HOG)特征提取算法以及本领域内的其他常规算法实现,本公开的实施例对此不作限制。
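例如，作为一个仅用于示意的参考，下面给出一段Python代码草图，演示"HOG特征提取+SVM分类"这一思路的大致形式；其中使用的scikit-image、scikit-learn库、参数取值以及随机生成的占位样本均为此处说明所作的假设，并非本公开实施例限定的实现方式，实际应用时需替换为真实样本库并按需调参。

```python
# 仅作示意的草图：用 HOG 特征 + SVM 训练一个简单的眼部图像分类模型
# 注意：samples/labels 为随机占位数据，仅用于演示流程，实际需使用真实样本库
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def extract_hog(images):
    # images: 形状为 (N, H, W) 的灰度图像数组，逐张提取 HOG 特征
    return np.array([hog(img, orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

samples = np.random.rand(200, 64, 64)            # 占位样本（假设的 64x64 灰度图）
labels = np.random.randint(0, 2, size=200)       # 占位标注：0 表示非眼部，1 表示眼部

features = extract_hog(samples)
x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)

clf = SVC(kernel="rbf")                          # 也可替换为其他常规分类器
clf.fit(x_train, y_train)
print("测试集准确率:", clf.score(x_test, y_test))
```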
对于步骤S20,例如,两眼在注视目标物体某个位置时,左眼视线和右眼视线会汇聚在该位置,汇聚的位置在本公开各个实施例中称为汇聚点。
在用户佩戴AR设备后,可观察到其视野范围内的真实环境中的实际物体和投影到用户眼前的虚拟信息,虚拟信息和实际物体可融合到同一个画面或者是空间中,达到超越现实的感官体验。
对于步骤S40,例如,当汇聚点与所述目标物体上的期望位置重合时,计算目标物体的几何参数。例如,该目标物体的几何参数包括:目标物体的高度、目标物体的宽度或目标物体的厚度等,本公开的实施例对此不作限制。
在计算目标物体的几何参数时，例如，在一些示例中，可以执行步骤S30。例如，本公开至少一实施例中的目标物体指用户观察到的真实环境中的实际物体，如果用户注视目标物体的某个位置的时间较长，用户可能对该目标物体较感兴趣，想进一步了解该目标物体的详细信息等。据此，本公开至少一实施例提供的几何量测量方法中，对汇聚点的停留时间进行计算。例如，汇聚点的停留时间为用户两眼注视目标物体上的某个位置的时间，当该停留时间大于预设阈值时，标记该汇聚点的位置。例如，该预设阈值可根据实际情况而定，本公开的实施例对此不作限制。
例如,标记汇聚点的位置的方法可能有多种,例如可设置一标记图形(圆形点、方框或者十字交叉图形),通过AR设备将标记图形投影在用户前方,用户可通过眼睛观察到该标记图形,即可观察到标记的汇聚点的位置。
例如，对汇聚点的位置进行标记后，用户两眼可通过AR设备观察到该汇聚点的位置，该汇聚点的位置有可能与目标物体上的期望位置不重合，此时用户可通过转动头部、改变位置或者转动眼球等调整左右眼视线，当标记的汇聚点与目标物体上的期望位置重合时，计算目标物体的几何参数。
例如,在一些示例中,当汇聚点与目标物体上的期望位置重合时,用户可通过操作设置在AR设备上的按钮或者虚拟菜单发出指令,即生成指示标记的汇聚点的位置与目标物体上的期望位置重合的指令,当AR设备接收到该指令时,或当AR设备直接检测到汇聚点与目标物体上的期望位置重合时,可计算汇聚点的位置与用户之间的距离等,以计算目标物体的几何参数。例如,该目标物体的几何参数的具体计算方法将在下面进行详细地介绍,在此不再赘述。
例如,上述目标物体上的期望位置为用户期望看到的目标物体上某个位置,可以是目标物体上的任何一个位置。例如,在确定目标物体的高度时,该期望位置可以是目标物体的上边缘位置和下边缘位置;在确定目标物体的宽度时,该期望位置可以是目标物体的左边缘位置和右边缘位置。
由上述描述可知，该几何量测量方法根据左右眼部特征确定用户的左右眼视线，进而确定左右眼视线的汇聚点。当标记的汇聚点的位置与目标物体上的期望位置重合时，可计算出汇聚点的位置与用户之间的距离，该距离也就是目标物体上的期望位置与用户之间的距离，可供用户了解自身与目标物体上某个位置之间的距离，进而根据该距离计算目标物体的几何参数。因此，通过上述方法可以扩展AR设备的功能，增加AR设备的用户体验。
需要说明的是，可在AR设备上设置可以生成测量指令的按钮或者虚拟菜单，当用户操作该按钮或菜单时会生成测量指令，此时AR设备进入测量功能模式。之后，AR设备开始执行该测量方法，获取左右眼部特征，计算汇聚点的停留时间，并计算汇聚点与用户之间的距离等；当未接收到测量指令时，AR设备可实现已有的功能，避免AR设备实时执行上述方法，造成不必要的消耗。
在一些实施方式中，如图2A所示，上述步骤S10所述的基于获取到的用户注视目标物体时的左右眼部图像确定左右眼视线，包括步骤S11至步骤S12。下面对该几何量测量方法的步骤S11至步骤S12以及它们各自的示例性实现方式分别进行介绍。
步骤S11:基于获取到的用户注视目标物体时的左右眼部图像确定左右眼瞳孔位置和左右眼的中心位置。
步骤S12:根据左右眼瞳孔位置和左右眼的中心位置确定左右眼视线。
例如，在本公开至少一实施例中，在确定左右眼视线方向时，根据获取到的左右眼部图像识别出左右眼瞳孔位置和左右眼的中心位置，进而确定左右眼视线方向。具体而言，通过图像识别算法从左眼图像中确定出左眼瞳孔位置和左眼的中心位置，从右眼图像中确定出右眼瞳孔位置和右眼的中心位置。
例如,获取左右眼部图像之后,可以通过重心法分别提取左右眼部图像的中心位置。例如,也可以通过Canny边缘检测算法、Hough变换拟合法以及双椭圆拟合算法等方法确定瞳孔轮廓以及获得瞳孔图像的特征点,并验证拟合瞳孔的轮廓,以确定瞳孔的中心位置。
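例如，作为一个仅用于示意的参考，下面给出一段Python代码草图，按照上述重心法的思路，通过阈值分割与轮廓重心估计瞳孔中心；其中OpenCV（假设为4.x版本）的调用方式与阈值取值均为示例性假设，实际实现需按设备与光照条件标定，也可替换为Hough变换拟合或双椭圆拟合等方法。

```python
# 仅作示意的草图：阈值分割 + 最大轮廓重心估计瞳孔中心（重心法）
# 假设输入为单只眼睛的 uint8 灰度图像，阈值 thresh 需按实际情况标定
import cv2

def pupil_center(eye_gray, thresh=40):
    # 瞳孔通常是眼部图像中最暗的区域，先做反向二值化
    _, binary = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    binary = cv2.medianBlur(binary, 5)                       # 中值滤波去噪
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x 返回两个值
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)               # 取面积最大的连通域作为瞳孔
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])        # 返回重心坐标 (x, y)
```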
例如,左右眼的中心位置可以与左右眼瞳孔的中心位置重合,也可以与左右眼瞳孔的中心位置不重合,本公开的实施例对此不作限制。例如,在一些示例中,参照图3所示,左眼瞳孔位置和右眼瞳孔位置可以是左眼瞳孔中心所在位置a1和右眼瞳孔中心所在的位置a2,左眼的中心位置可以指整个左眼区域的中心,例如图3中示意性的示出了左眼区域和右眼区域均为一椭圆形区域,左眼的中心位置b1可以指椭圆形的左眼区域的中心,同样的,右眼的中心位置b2可以指整个椭圆形的右眼区域的中心。
例如，根据左眼瞳孔位置和左眼的中心位置可确定左眼视线，如图3中所示的，左眼视线M1为通过左眼的中心位置b1和左眼瞳孔a1，且延伸到目标物体A上的某一位置的线段。例如，该目标物体A上的某一位置可以是期望位置，也可以不是期望位置，本公开的实施例对此不作限制。例如，右眼视线M2为通过右眼的中心位置b2和右眼瞳孔a2，且延伸到目标物体A上的某一位置的线段，左眼视线和右眼视线汇聚后的位置（即汇聚点的位置O）为目标物体上的某个位置。
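例如，为便于理解汇聚点的确定过程，下面给出一段仅作示意、简化到二维平面的Python代码草图：假设左右眼的中心位置与瞳孔位置已换算到同一坐标系，则每条视线可表示为由眼睛中心指向瞳孔方向的射线，两条射线的交点即可作为汇聚点；其中的函数名与坐标数值仅为演示用的假设。

```python
# 仅作示意的草图：在二维平面内由左右眼中心与瞳孔位置求两条视线的交点（汇聚点）
# 假设两条视线不平行（不平行才有唯一交点），且各点坐标已在同一坐标系下
import numpy as np

def convergence_point(center_l, pupil_l, center_r, pupil_r):
    # 每条视线表示为 p = o + t * d，o 为眼睛中心位置，d 为指向瞳孔的方向向量
    o1 = np.asarray(center_l, dtype=float)
    o2 = np.asarray(center_r, dtype=float)
    d1 = np.asarray(pupil_l, dtype=float) - o1
    d2 = np.asarray(pupil_r, dtype=float) - o2
    # 解方程 o1 + t1*d1 = o2 + t2*d2
    a = np.column_stack((d1, -d2))
    t = np.linalg.solve(a, o2 - o1)
    return o1 + t[0] * d1

# 用法示例（坐标仅为演示）：交点约为 (0, 8)，即位于两眼前方
print(convergence_point((-3.2, 0), (-3.0, 0.5), (3.2, 0), (3.0, 0.5)))
```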
需要说明的是，上述左右眼瞳孔位置和左右眼的中心位置可能并非准确的位置，会有一定的误差，确定出的左右眼视线也会有一定的误差，计算出的汇聚点的位置与用户之间的距离也会有误差，但是这些都是允许的，本方法只需要计算出汇聚点的位置（也就是目标物体上的某个位置）与用户之间的大致距离即可。
在一些示例中,如图2B所示,上述步骤S40所述的计算目标物体的几何参数,包括步骤S41至步骤S42。下面对该几何量测量方法的步骤S41至步骤S42以及它们各自的示例性实现方式分别进行介绍。
步骤S41:当多个汇聚点的位置与目标物体上的多个期望位置重合时,分别计算多个汇聚点的位置与用户之间的距离。
步骤S42:根据计算出的多个汇聚点的位置与用户之间的多个距离以及所述用户注视所述目标物体上的不同期望位置时的视线偏转角度,确定目标物体的几何参数。
对于步骤S41，例如，根据左右眼视线与经过左右眼的中心位置的直线之间的夹角的余角以及左右眼的中心位置的直线距离计算汇聚点的位置与用户之间的距离。
例如，如图3所示，经过左右眼的中心位置的直线为N，左眼视线M1与该直线N之间的夹角的余角为θ1，该余角θ1也可称为左眼瞳孔偏转角度；右眼视线M2与该直线N之间的夹角的余角为θ2，该余角θ2也可称为右眼瞳孔偏转角度；左右眼的中心位置的直线距离为PD，即左眼瞳孔和右眼瞳孔之间的距离为PD。左右眼的中心位置的直线距离可以为预设值，或者根据左眼瞳孔位置和右眼瞳孔位置实时计算出的值。
例如,根据几何关系可知,汇聚点的位置O与用户之间的距离L,左右眼的中心位置的直线距离PD和上述的左眼瞳孔偏转角度θ1和右眼瞳孔偏转角度θ2之间存在如下关系:
L*tanθ1+L*tanθ2=PD
则由下述公式可以计算出汇聚点的位置O与用户之间的距离L为:
L = PD / (tanθ1 + tanθ2)
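例如，作为一个仅用于示意的参考，下面给出一段Python代码草图，按上述公式 L = PD/(tanθ1 + tanθ2) 计算汇聚点的位置与用户之间的距离；其中的函数名、瞳距与偏转角度数值仅为演示用的假设。

```python
# 仅作示意的草图：根据 L = PD / (tanθ1 + tanθ2) 计算汇聚点与用户之间的距离
import math

def convergence_distance(pd, theta1_deg, theta2_deg):
    # pd: 左右眼的中心位置的直线距离（例如瞳距），返回值与 pd 使用同一单位
    # theta1_deg / theta2_deg: 左、右眼瞳孔偏转角度（单位：度）
    return pd / (math.tan(math.radians(theta1_deg)) + math.tan(math.radians(theta2_deg)))

# 用法示例（数值为假设）：瞳距 65 mm，两眼偏转角度各约 1.8 度，结果约 1034 mm
print(convergence_distance(65, 1.8, 1.8))
```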
例如,在一些示例中,可以根据计算出的多个汇聚点的位置与用户之间的多个距离,以及用户注视目标物体上的不同期望位置时的视线偏转角度确定目标物体的几何参数。
对于步骤S42,例如,在本公开至少一实施例中,可以根据上述计算出的多个距离以及用户注视目标物体上的不同期望位置时的视线偏转角度,进一步确定目标物体的几何参数。
例如,多个汇聚点的位置与用户之间的多个距离包括与目标物体上的第一边缘上的期望位置重合的汇聚点的位置与用户之间的第一距离,和与目标物体上的第二边缘上的期望位置重合的汇聚点的位置与用户之间的第二距离。例如,第一边缘和第二边缘为相对设置的两个边缘,例如,为上边缘和下边缘,或左边缘与右边缘等,本公开的实施例对此不作限制。
例如,如图4所示,假设用户左右眼持续注视目标物体A上的某一位置(例如,第一边缘上的期望位置)时,根据左右眼视线确定左右眼视线的汇聚点为第一汇聚点,该第一汇聚点的停留时间大于预设阈值,标记该第一汇聚点的位置为O1点,在接收到指示标记的该第一汇聚点的位置O1与目标物体A上的第一期望位置(例如,第一边缘上的期望位置)重合的指令时,计算该第一汇聚点的位置O1与用户之间的第一距离为L1。同样地,用户左右眼持续的注视目标物体A上的另一位置(例如,第二边缘上的期望位置)时,根据左右眼视线确定左右眼视线的汇聚点为第二汇聚点,该第二汇聚点的停留时间大于预设阈值,标记该第二汇聚点的位置为O2点,在接收到指示标记的该第二汇聚点的位置O2与目标物体A上的第二期望位置(例如,第二边缘上的期望位置)重合的指令时,计算该第二汇聚点的位置O2与用户之间的第二距离为L2。
例如，可根据AR设备上的陀螺仪获取用户注视目标物体上的不同期望位置时的头部偏转角度（例如，由用户注视第一期望位置到注视第二期望位置时）。例如，该头部偏转角度可以作为视线偏转角度，例如，如图4所示，视线偏转角度为：用户注视与第一汇聚点的位置O1重合的第一期望位置和用户注视与第二汇聚点的位置O2重合的第二期望位置时的偏转角度β，即第一距离L1和第二距离L2之间的偏转角度β。
根据几何关系可知,第一距离L1、第二距离L2、视线偏转角度β以及第一汇聚点的位置O1(即第一期望位置)与第二汇聚点的位置O2(即第二期望位置)之间的距离H存在如下关系:
H = √(L1² + L2² − 2·L1·L2·cosβ)
其中，H表示目标物体A上的两个期望位置之间的距离。例如，该距离即为目标物体的几何参数之一，例如为目标物体A的宽度、高度或厚度等。
例如，若上述的第一汇聚点的位置O1为位于目标物体A的上边缘（例如，第一边缘）上的某一位置（例如，第一期望位置），且第一汇聚点的位置O1与该上边缘上的第一期望位置重合，则上述的第一距离L1为与该上边缘上的期望位置重合的第一汇聚点O1的位置与用户之间的距离；若上述的第二汇聚点的位置O2为位于目标物体A的下边缘（例如，第二边缘）上的另一位置（例如，第二期望位置），且第二汇聚点的位置O2与该下边缘上的第二期望位置重合，则上述的第二距离L2为与该下边缘上的期望位置重合的第二汇聚点O2的位置与用户之间的距离。根据上述公式可以计算出上边缘上的第一期望位置和下边缘上的第二期望位置之间的距离H，该距离H即为目标物体的高度。类似地，可参照上述方法确定出目标物体的其他几何参数，例如目标物体的宽度（例如，左右边缘的期望位置之间的距离）和目标物体的厚度，或者目标物体上的其他部分之间的距离等。
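例如，作为一个仅用于示意的参考，下面给出一段Python代码草图，按上述余弦定理关系由第一距离L1、第二距离L2和视线偏转角度β估计两个期望位置之间的距离H（例如目标物体的高度）；其中的函数名、距离与角度数值仅为演示用的假设。

```python
# 仅作示意的草图：按 H = sqrt(L1^2 + L2^2 - 2*L1*L2*cosβ) 估计两个期望位置之间的距离
import math

def geometric_parameter(l1, l2, beta_deg):
    # l1/l2: 两个汇聚点的位置与用户之间的距离；beta_deg: 视线偏转角度（单位：度）
    beta = math.radians(beta_deg)
    return math.sqrt(l1 ** 2 + l2 ** 2 - 2 * l1 * l2 * math.cos(beta))

# 用法示例（数值为假设）：上、下边缘距用户分别约 2.0 m 与 2.1 m，偏转角约 40 度，
# 估计出的高度约 1.41 m
print(geometric_parameter(2.0, 2.1, 40))
```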
当然,上述只是举例说明了一种计算目标物体的几何参数的方法,也可以根据多个距离和多个视线偏转角度,采用其他计算方式确定出目标物体的几何参数,并不限于本实施例所述的方法。
需要说明的是，本公开的实施例的几何量测量方法的流程可以包括更多或更少的操作，这些操作可以顺序执行或并行执行。虽然上文描述的几何量测量方法的流程包括特定顺序出现的多个操作，但是应该清楚地了解，多个操作的顺序并不受限制。上文描述的几何量测量方法可以执行一次，也可以按照预定条件执行多次。
本公开至少一实施例提供的几何量测量方法可以根据左右眼图像计算目标物体的几何参数，从而扩展了AR设备的功能，增加了AR设备的用户体验。
本公开至少一实施例还提供一种基于增强现实设备的几何量测量装置。图5A为本公开至少一实施例提供的一种几何量测量装置的示意框图。如图5A所示,在一些示例中,该几何量测量装置05包括视线确定单元501、汇聚点确定单元502以及几何参数计算单元504。
视线确定单元501,配置为获取用户注视目标物体时的左右眼部图像,并基于获取到的用户注视目标物体时的左右眼部图像确定左右眼视线。例如,该视线确定单元501可以实现步骤S10,其具体实现方法可以参考步骤S10的相关描述,在此不再赘述。
汇聚点确定单元502,配置为根据左右眼视线确定左右眼视线的汇聚点。例如,该汇聚点确定单元502可以实现步骤S20,其具体实现方法可以参考步骤S20的相关描述,在此不再赘述。
几何参数计算单元504，配置为当所述汇聚点的位置与目标物体上的期望位置重合时（例如，在接收到用于指示标记的汇聚点的位置与目标物体上的期望位置重合的指令时），计算所述目标物体的几何参数。例如，该几何参数计算单元504可以实现步骤S40，其具体实现方法可以参考步骤S40的相关描述，在此不再赘述。
例如,在另一些示例中,该几何量测量装置05还包括汇聚点的位置标记单元503。例如该汇聚点的位置标记单元503配置为当所述汇聚点的停留时间大于预设阈值时,标记所述汇聚点的位置。在该示例中,该几何参数计算单元504配置为当标记的汇聚点的位置与目标物体上的期望位置重合时,计算目标物体的几何参数。例如,该汇聚点的位置标记单元503可以实现步骤S30,其具体实现方法可以参考步骤S30的相关描述,在此不再赘述。
例如,在一些实施方式中,如图5B所示,视线确定单元501包括瞳孔位置和左右眼的中心位置确定子单元5011和视线获取子单元5012。
瞳孔位置和左右眼的中心位置确定子单元5011,配置为基于获取到的用户注视目标物体时的左右眼部图像确定左右眼瞳孔位置和左右眼的中心位置。
视线获取子单元5012,配置为根据左右眼瞳孔位置和左右眼的中心位置确定左右眼视线。
例如,在一些示例中,如图5C所示,几何参数计算单元504包括距离计算子单元5041和几何参数确定子单元5042。
例如,距离计算子单元5041配置为当标记的汇聚点的位置与目标物体上的期望位置重合时,计算汇聚点的位置与用户之间的距离。
例如，具体地，距离计算子单元5041配置为根据左右眼视线与经过左右眼的中心位置的直线之间的夹角的余角以及左右眼的中心位置的直线距离计算所述汇聚点的位置与用户之间的距离。
几何参数确定子单元5042,配置为根据计算出的多个汇聚点的位置与用户之间的多个距离以及用户注视目标物体上的不同期望位置时的视线偏转角度,确定目标物体的几何参数。
与前述几何量测量方法的实施例相对应，本公开提供的几何量测量装置可扩展AR设备的功能，增加AR设备的用户体验。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,即可以位于一个地方,或者也可以分布到多个网络单元上;上述各单元可以合并为一个单元,也可以进一步拆分成多个子单元。
通过以上的实施方式的描述,本实施例的装置可借助软件的方式实现,或者软件加必需的通用硬件的方式来实现,当然也可以通过硬件实现。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,以软件实现为例,作为一个逻辑意义上的装置,是通过应用该装置的增强现实AR设备所在的处理器将非易失性存储器中对应的计算机程序指令读取到内存中运行形成的。
需要注意的是，本公开的实施例提供的几何量测量装置可以包括更多或更少的电路，并且各个电路之间的连接关系不受限制，可以根据实际需求而定。各个电路的具体构成方式不受限制，可以根据电路原理由模拟器件构成，也可以由数字芯片构成，或者以其他适用的方式构成。
图5D为本公开至少一实施例提供的另一种几何量测量装置的示意框图。如图5D所示,该几何量测量装置200包括处理器210、机器可读存储介质220以及一个或多个计算机程序模块221。
例如，处理器210与机器可读存储介质220通过总线系统230连接。例如，一个或多个计算机程序模块221被存储在机器可读存储介质220中。例如，一个或多个计算机程序模块221包括用于执行本公开任一实施例提供的几何量测量方法的指令。例如，一个或多个计算机程序模块221中的指令可以由处理器210执行。例如，总线系统230可以是常用的串行、并行通信总线等，本公开的实施例对此不作限制。
例如,该处理器210可以是中央处理单元(CPU)、图像处理器(GPU)或者具有数据处理能力和/或指令执行能力的其它形式的处理单元,可以为通用处理器或专用处理器,并且可以控制几何量测量装置200中的其它组件以执行期望的功能。
机器可读存储介质220可以包括一个或多个计算机程序产品,该计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。该易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。该非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。在计算机可读存储介质上可以存储一个或多个计算机程序指令,处理器210可以运行该程序指令,以实现本公开实施例中(由处理器210实现)的功能以及/或者其它期望的功能,例如几何量测量方法等。在该计算机可读存储介质中还可以存储各种应用程序和各种数据,例如汇聚点以及应用程序使用和/或产生的各种数据等。
需要说明的是,为表示清楚、简洁,本公开实施例并没有给出该几何量测量装置200的全部组成单元。为实现几何量测量装置200的必要功能,本领域技术人员可以根据具体需要提供、设置其他未示出的组成单元,本公开的实施例对此不作限制。
关于不同实施例中的几何量测量装置100和几何量测量装置200的技术效果可以参考本公开的实施例中提供的几何量测量方法的技术效果,这里不再赘述。
本公开至少一实施例还提供一种增强现实设备。图5E-图6分别为本公开至少一实施例提供的一种增强现实设备的示意图。
如图5E所示,在一个示例中,该增强现实设备1包括本公开任一实施例提供的几何量测量装置100/200,几何量测量装置100/200具体可参考图5A至图5D的相关描述,在此不再赘述。
如图5F所示，该增强现实设备1可以佩戴在人的眼部，目标物体（图中未示出）可位于人的前方，从而可以根据需要实现几何量测量功能。
例如,在另一个示例中,如图6所示,该AR设备1包括:处理器101和机器可读存储介质102,还可以包括非易失性介质103、通信接口104和总线105。例如,机器可读存储介质102、处理器101、非易失性介质103和通信接口104通过总线105完成相互间的通信。处理器101通过读取并执行机器可读存储介质102中与几何量测量方法的控制逻辑对应的机器可执行指令。
例如,该通信接口104与通信装置(图中未示出)连接。该通信装置可以通过无线通信来与网络和其他设备进行通信,该网络例如为因特网、内部网和/或诸如蜂窝电话网络之类的无线网络、无线局域网(LAN)和/或城域网(MAN)。无线通信可以使用多种通信标准、协议和技术中的任何一种,包括但不局限于全球移动通信系统(GSM)、增强型数据GSM环境(EDGE)、宽带码分多址(W-CDMA)、码分多址(CDMA)、时分多址(TDMA)、蓝牙、Wi-Fi(例如基于IEEE 802.11a、IEEE 802.11b、IEEE 802.11g和/或IEEE 802.11n标准)、基于因特网协议的语音传输(VoIP)、Wi-MAX,用于电子邮件、即时消息传递和/或短消息服务(SMS)的协议,或任何其他合适的通信协议。
本公开实施例中提到的机器可读存储介质可以是任何电子、磁性、光学或其它物理存储装置，可以包含或存储信息，如可执行指令、数据，等等。例如，机器可读存储介质可以是：RAM（Random Access Memory，随机存取存储器）、易失存储器、非易失性存储器、闪存、存储驱动器（如硬盘驱动器）、任何类型的存储盘（如光盘、DVD等），或者类似的存储介质，或者它们的组合。
非易失性介质103可以是非易失性存储器、闪存、存储驱动器(如硬盘驱动器)、任何类型的存储盘(如光盘、dvd等),或者类似的非易失性存储介质,或者它们的组合。
需要说明的是，为表示清楚、简洁，本公开实施例并没有给出该AR设备1的全部组成单元。为实现AR设备1的必要功能，本领域技术人员可以根据具体需要提供、设置其他未示出的组成单元，本公开的实施例对此不作限制。
本公开实施例提供的增强现实设备,可以确定目标物体上的期望位置与用户之间的距离,该距离可供用户了解自身距离目标物体上某个位置的距离,因此,扩展了AR设备的功能,增加了AR设备的用户体验。
本公开一实施例还提供一种存储介质。例如,如图7所示,该存储介质400非暂时性地存储计算机可读指令401,当非暂时性存储的计算机可读指令401由计算机(包括处理器)执行时可以执行本公开任一实施例提供的几何量测量方法。
例如,该存储介质可以是一个或多个计算机可读存储介质的任意组合,例如一个计算机可读存储介质包含获取左右眼视线的计算机可读的程序代码,另一个计算机可读存储介质包含确定汇聚点的计算机可读的程序代码。例如,当该程序代码由计算机读取时,计算机可以执行该计算机存储介质中存储的程序代码,执行例如本公开任一实施例提供的几何量测量方法。
例如,存储介质可以包括智能电话的存储卡、平板电脑的存储部件、个人计算机的硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)、便携式紧致盘只读存储器(CD-ROM)、闪存、或者上述存储介质的任意组合,也可以为其他适用的存储介质。
有以下几点需要说明:
（1）本公开实施例附图只涉及与本公开实施例相关的结构，其他结构可参考通常设计。
(2)在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合以得到新的实施例。
以上所述仅是本公开的示范性实施方式,而非用于限制本公开的保护范围,本公开的保护范围由所附的权利要求确定。

Claims (19)

  1. 一种几何量测量方法,包括:
    获取用户注视目标物体时的左右眼部图像,并基于获取到的所述用户注视所述目标物体时的左右眼部图像确定左右眼视线;
    根据所述左右眼视线确定所述左右眼视线的汇聚点;
    当所述汇聚点与所述目标物体上的期望位置重合时,计算所述目标物体的几何参数。
  2. 根据权利要求1所述的方法,还包括:
    当所述汇聚点的停留时间大于预设阈值时,标记所述汇聚点的位置。
  3. 根据权利要求2所述的方法,其中,当所述标记的汇聚点的位置与所述目标物体上的期望位置重合时,计算所述目标物体的几何参数。
  4. 根据权利要求1-3任一所述的方法,其中,基于获取到的所述用户注视所述目标物体时的左右眼部图像确定左右眼视线,包括:
    基于获取到的所述用户注视所述目标物体时的左右眼部图像确定左右眼瞳孔位置和左右眼的中心位置;
    根据所述左右眼瞳孔位置和所述左右眼的中心位置确定所述左右眼视线。
  5. 根据权利要求4所述的方法,其中,计算所述目标物体的几何参数包括:
    当所述汇聚点的位置与所述目标物体上的期望位置重合时,计算所述汇聚点的位置与所述用户之间的距离;
    根据计算出的多个汇聚点的位置与所述用户之间的多个距离以及所述用户注视所述目标物体上的不同期望位置时的视线偏转角度,确定所述目标物体的几何参数。
  6. 根据权利要求5所述的方法,其中,计算所述汇聚点的位置与所述用户之间的距离,包括:
    根据所述左右眼视线与经过所述左右眼的中心位置的直线之间的夹角的余角以及所述左右眼的中心位置的直线距离计算所述汇聚点的位置与所述用户之间的距离。
  7. 根据权利要求5或6所述的方法,其中,所述多个汇聚点的位置与所述用户之间的多个距离包括与所述目标物体上的第一边缘上的期望位置重合的汇聚点的位置与所述用户之间的第一距离,和与所述目标物体上的第二边缘上的期望位置重合的汇聚点的位置与所述用户之间的第二距离;
    其中,所述第一边缘是所述目标物体上与所述第二边缘相对设置的边缘。
  8. 根据权利要求5-7任一所述的方法,其中,所述用户注视所述目标物体上的不同期望位置时的视线偏转角度为所述第一距离与所述第二距离之间的角度。
  9. 根据权利要求1-8任一所述的方法,其中,所述目标物体的几何参数包括:所述目标物体的高度、所述目标物体的宽度或所述目标物体的厚度等。
  10. 根据权利要求2-9任一所述的方法,其中,在接收到用于指示所述标记的汇聚点的位置与所述目标物体上的期望位置重合的指令时,计算所述目标物体的几何参数。
  11. 根据权利要求1-9任一所述的方法,其中,所述几何量测量方法用于增强现实设备。
  12. 一种几何量测量装置,包括:
    视线确定单元,配置为获取用户注视目标物体时的左右眼部图像,并基于获取到的所述用户注视所述目标物体时的左右眼部图像确定左右眼视线;
    汇聚点确定单元,配置为根据所述左右眼视线确定所述左右眼视线的汇聚点;
    几何参数计算单元,配置为当所述汇聚点与目标物体上的期望位置重合时,计算所述目标物体的几何参数。
  13. 根据权利要求12所述的装置,还包括:
    汇聚点的位置标记单元,配置为当所述汇聚点的停留时间大于预设阈值时,标记所述汇聚点的位置;
    其中,所述几何参数计算单元配置为当所述标记的汇聚点的位置与所述目标物体上的期望位置重合时,计算所述目标物体的几何参数。
  14. 根据权利要求12所述的装置,其中,所述视线确定单元包括:
    瞳孔位置和左右眼的中心位置确定子单元,配置为基于获取到的用户注视目标物体时的左右眼部图像确定左右眼瞳孔位置和左右眼的中心位置;
    视线获取子单元,配置为根据所述左右眼瞳孔位置和所述左右眼的中心位置确定左右眼视线。
  15. 根据权利要求12-14任一项所述的装置,其中,所述几何参数计算单元包括:
    距离计算子单元,配置为当所述标记的汇聚点的位置与所述目标物体上的期望位置重合时,计算所述汇聚点的位置与所述用户之间的距离;
    几何参数计算子单元,配置为根据计算出的多个汇聚点的位置与所述用户之间的多个距离以及所述用户注视目标物体上的不同期望位置时的视线偏转角度,确定所述目标物体的几何参数。
  16. 根据权利要求15所述的装置,其中,所述距离计算子单元配置为根据所述左右眼视线与经过所述左右眼的中心位置的直线之间的夹角的余角以及所述左右眼的中心位置的直线距离计算所述汇聚点的位置与所述用户之间的距离。
  17. 一种几何量测量装置,包括:
    处理器;
    机器可读存储介质,存储有一个或多个计算机程序模块;
    其中,所述一个或多个计算机程序模块被存储在所述机器可读存储介质中并被配置为由所述处理器执行,所述一个或多个计算机程序模块包括用于执行实现权利要求1-11任一所述的几何量测量方法的指令。
  18. 一种增强现实设备,包括如权利要求12-17任一所述的几何量测量装置。
  19. 一种存储介质,非暂时性地存储计算机可读指令,当所述非暂时性存储的计算机可读指令由计算机执行时可以执行根据权利要求1-11任一所述的几何量测量方法的指令。
PCT/CN2019/084110 2018-04-28 2019-04-24 几何量测量方法及其装置、增强现实设备和存储介质 WO2019206187A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/498,822 US11385710B2 (en) 2018-04-28 2019-04-24 Geometric parameter measurement method and device thereof, augmented reality device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810401274.XA CN108592865A (zh) 2018-04-28 2018-04-28 基于ar设备的几何量测量方法及其装置、ar设备
CN201810401274.X 2018-04-28

Publications (1)

Publication Number Publication Date
WO2019206187A1 true WO2019206187A1 (zh) 2019-10-31

Family

ID=63619150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084110 WO2019206187A1 (zh) 2018-04-28 2019-04-24 几何量测量方法及其装置、增强现实设备和存储介质

Country Status (3)

Country Link
US (1) US11385710B2 (zh)
CN (1) CN108592865A (zh)
WO (1) WO2019206187A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108592865A (zh) * 2018-04-28 2018-09-28 京东方科技集团股份有限公司 基于ar设备的几何量测量方法及其装置、ar设备
CN112083795A (zh) * 2019-06-12 2020-12-15 北京迈格威科技有限公司 对象控制方法及装置、存储介质和电子设备
CN111309144B (zh) * 2020-01-20 2022-02-01 北京津发科技股份有限公司 三维空间内注视行为的识别方法、装置及存储介质
TWI790640B (zh) * 2021-06-11 2023-01-21 宏碁股份有限公司 擴增實境顯示裝置與方法
CN115525139A (zh) * 2021-06-24 2022-12-27 北京有竹居网络技术有限公司 在头戴式显示设备中获取注视目标的方法及装置
CN114903424A (zh) * 2022-05-31 2022-08-16 上海商汤临港智能科技有限公司 眼睛类型检测方法及装置、计算机设备、存储介质
CN115546214B (zh) * 2022-12-01 2023-03-28 广州视景医疗软件有限公司 一种基于神经网络的集合近点测量方法和装置

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3771964B2 (ja) * 1996-03-12 2006-05-10 オリンパス株式会社 立体映像ディスプレイ装置
JP2000013818A (ja) * 1998-06-23 2000-01-14 Nec Corp 立体表示装置及び立体表示方法
US20060250322A1 (en) * 2005-05-09 2006-11-09 Optics 1, Inc. Dynamic vergence and focus control for head-mounted displays
EP2042079B1 (en) * 2006-07-14 2010-10-20 Panasonic Corporation Visual axis direction detection device and visual line direction detection method
US20100321482A1 (en) * 2009-06-17 2010-12-23 Lc Technologies Inc. Eye/head controls for camera pointing
US20110075257A1 (en) * 2009-09-14 2011-03-31 The Arizona Board Of Regents On Behalf Of The University Of Arizona 3-Dimensional electro-optical see-through displays
US8576276B2 (en) * 2010-11-18 2013-11-05 Microsoft Corporation Head-mounted display device which provides surround video
US9255813B2 (en) * 2011-10-14 2016-02-09 Microsoft Technology Licensing, Llc User controlled real object disappearance in a mixed reality display
US8611015B2 (en) * 2011-11-22 2013-12-17 Google Inc. User interface
CN103256917B (zh) * 2012-02-15 2017-12-12 赛恩倍吉科技顾问(深圳)有限公司 可应用于测距的立体视觉系统
US20130241805A1 (en) * 2012-03-15 2013-09-19 Google Inc. Using Convergence Angle to Select Among Different UI Elements
IL219907A (en) 2012-05-21 2017-08-31 Lumus Ltd Integrated head display system with eye tracking
WO2014033306A1 (en) * 2012-09-03 2014-03-06 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Head mounted system and method to compute and render a stream of digital images using a head mounted system
US9239460B2 (en) * 2013-05-10 2016-01-19 Microsoft Technology Licensing, Llc Calibration of eye location
US10198865B2 (en) * 2014-07-10 2019-02-05 Seiko Epson Corporation HMD calibration with direct geometric modeling
CA2957766C (en) * 2014-08-10 2023-10-17 Autonomix Medical, Inc. Ans assessment systems, kits, and methods
CN105866949B (zh) * 2015-01-21 2018-08-17 成都理想境界科技有限公司 能自动调节景深的双目ar头戴设备及景深调节方法
CN105872527A (zh) * 2015-01-21 2016-08-17 成都理想境界科技有限公司 双目ar头戴显示设备及其信息显示方法
CN107588730A (zh) * 2017-09-11 2018-01-16 上海闻泰电子科技有限公司 利用ar设备测量高度的方法及装置
CA3075096A1 (en) * 2017-09-21 2019-03-28 Magic Leap, Inc. Augmented reality display with waveguide configured to capture images of eye and/or environment
CA3084169A1 (en) * 2017-12-14 2019-06-20 Magic Leap, Inc. Contextual-based rendering of virtual avatars

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102830793A (zh) * 2011-06-16 2012-12-19 北京三星通信技术研究有限公司 视线跟踪方法和设备
CN107111381A (zh) * 2015-11-27 2017-08-29 Fove股份有限公司 视线检测系统、凝视点确认方法以及凝视点确认程序
CN107884930A (zh) * 2016-09-30 2018-04-06 宏达国际电子股份有限公司 头戴式装置及控制方法
CN107657235A (zh) * 2017-09-28 2018-02-02 北京小米移动软件有限公司 基于增强现实的识别方法及装置
CN108592865A (zh) * 2018-04-28 2018-09-28 京东方科技集团股份有限公司 基于ar设备的几何量测量方法及其装置、ar设备

Also Published As

Publication number Publication date
US11385710B2 (en) 2022-07-12
CN108592865A (zh) 2018-09-28
US20210357024A1 (en) 2021-11-18

Similar Documents

Publication Publication Date Title
WO2019206187A1 (zh) 几何量测量方法及其装置、增强现实设备和存储介质
US10990803B2 (en) Key point positioning method, terminal, and computer storage medium
US11749025B2 (en) Eye pose identification using eye features
US20210209851A1 (en) Face model creation
US11587297B2 (en) Virtual content generation
CN107111753B (zh) 用于注视跟踪模型的注视检测偏移
US11227158B2 (en) Detailed eye shape model for robust biometric applications
US11715231B2 (en) Head pose estimation from local eye region
US20220301217A1 (en) Eye tracking latency enhancements
US11693475B2 (en) User recognition and gaze tracking in a video system
US10319086B2 (en) Method for processing image and electronic device supporting the same
US11163995B2 (en) User recognition and gaze tracking in a video system
JP2016512765A (ja) 軸上視線追跡システム及び方法
WO2019045750A1 (en) DETAILED EYE SHAPE MODEL FOR ROBUST BIOMETRIC APPLICATIONS
WO2012137801A1 (ja) 入力装置及び入力方法並びにコンピュータプログラム
JP2018205819A (ja) 注視位置検出用コンピュータプログラム、注視位置検出装置及び注視位置検出方法
JPWO2018220963A1 (ja) 情報処理装置、情報処理方法、及びプログラム
JP2018120299A (ja) 視線検出用コンピュータプログラム、視線検出装置及び視線検出方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19793757

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19793757

Country of ref document: EP

Kind code of ref document: A1