WO2021227402A1 - Image display method, AR glasses, and storage medium - Google Patents

Image display method, AR glasses, and storage medium

Info

Publication number
WO2021227402A1
WO2021227402A1 (PCT/CN2020/127358; CN2020127358W)
Authority
WO
WIPO (PCT)
Prior art keywords
glasses
area
camera
target
image
Prior art date
Application number
PCT/CN2020/127358
Other languages
English (en)
French (fr)
Inventor
马婉婉
姜滨
迟小羽
Original Assignee
歌尔股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔股份有限公司 filed Critical 歌尔股份有限公司
Priority to US17/996,459 (US11835726B2)
Publication of WO2021227402A1

Classifications

    • G02B27/0093 Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B27/017 Head-up displays, head mounted
    • G02B27/0172 Head mounted characterised by optical features
    • G02B27/0176 Head mounted characterised by mechanical features
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G06F3/013 Eye tracking input arrangements
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V20/10 Terrestrial scenes
    • G06V20/20 Scene-specific elements in augmented reality scenes
    • G06V40/18 Eye characteristics, e.g. of the iris
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • G02B2027/0138 Head-up displays comprising image capture systems, e.g. camera
    • G02B2027/014 Head-up displays comprising information/image processing systems
    • G02B2027/0141 Head-up displays characterised by the informative content of the display
    • G02B2027/0154 Head-up displays with movable elements
    • G02B2027/0178 Head mounted, eyeglass type
    • G02B2027/0187 Display position adjusting means slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G06V2201/07 Target detection

Definitions

  • This application relates to the technical field of smart wearable devices, and in particular to an image display method, AR glasses, and a storage medium.
  • AR: Augmented Reality.
  • AR glasses using AR technology have been widely used in games, entertainment, medical care, shopping, and education.
  • AR glasses in related technologies focus on displaying pre-set virtual scenes, such as roller coasters and racing tracks, and usually only recognize objects in the center area of the AR glasses' shooting range while ignoring objects in the edge area of the shooting range, so the accuracy of object recognition is low.
  • The purpose of this application is to provide an image display method, AR glasses, and a storage medium, which can improve the object recognition accuracy of the AR glasses.
  • the present application provides an image display method applied to AR glasses.
  • the AR glasses are provided with a first camera on the inner frame of the glasses, and the AR glasses are provided with a second camera on the outer frame of the glasses.
  • The image display method includes:
  • an image including the position information of the target object is displayed through the AR glasses lens.
  • the method further includes:
  • the area corresponding to the target image is set as the first type area, and the area map is updated; wherein the area map includes the first type area and the second type area, the first type area is an area that does not include the target object, and the second type area is an area that includes the target object;
  • the updated area map is displayed through the AR glasses lens.
  • the method further includes:
  • the degree of completion of the work is calculated according to the area ratio of the first-type area and the second-type area in the updated area map, and the degree of completion of the work is displayed on the AR glasses lens.
  • it also includes:
  • the safety device includes any one or a combination of a seat belt detection device, a blood pressure monitoring device, and a heart rate detection device.
  • the method further includes:
  • acquiring the target image collected after the second camera adjusts the shooting angle includes:
  • the AR glasses include an acceleration sensor for detecting the tilt angle of the head;
  • the image display method further includes:
  • a preset picture with preset transparency is displayed in the target lens area.
  • This application also provides AR glasses, including:
  • the first camera arranged on the inner frame of the glasses of the AR glasses is used to take pictures of the eyes of the wearer of the glasses;
  • the second camera arranged on the outer frame of the AR glasses is used to collect a target image
  • a processor, respectively connected to the first camera and the second camera, configured to determine the eyeball rotation information of the glasses wearer according to the eye picture; to adjust the shooting angle of the second camera according to the eyeball rotation information and acquire the target image collected after the second camera adjusts the shooting angle; and to determine whether there is a target object in the target image; if so, to display an image including the position information of the target object through the AR glasses lens.
  • it also includes:
  • a communication device, respectively connected to the processor and the safety device, used to send the safety monitoring information transmitted by the safety device to the processor; wherein the safety device includes any one or a combination of a seat belt detection device, a blood pressure monitoring device, and a heart rate detection device;
  • the processor is further configured to display the safety monitoring information through the AR glasses lens.
  • The present application also provides a storage medium on which a computer program is stored; when the computer program is executed, the steps of the above-mentioned image display method are implemented.
  • The present application provides an image display method applied to AR glasses. The AR glasses are provided with a first camera on the inner frame of the glasses and a second camera on the outer frame of the glasses. The image display method includes: using the first camera to detect the eyeball rotation information of the wearer of the glasses; adjusting the shooting angle of the second camera according to the eyeball rotation information, and obtaining the target image collected after the second camera adjusts the shooting angle; determining whether there is a target object in the target image; and if so, displaying an image including the position information of the target object through the AR glasses lens.
  • the AR glasses used in this application include a first camera for detecting eye rotation information, and a second camera for collecting target images.
  • the present application adjusts the shooting angle of the second camera according to the eyeball rotation information of the wearer of the glasses, and then collects the corresponding target image at the shooting angle. If the target image includes the target object, the position information of the target object is displayed through the AR glasses lens.
  • This application can dynamically adjust the shooting angle of the second camera according to the real-time eyeball rotation information of the glasses wearer, and obtain a target image that matches the observation area of the glasses wearer, so as to determine whether there is a target object in the area currently observed by the user.
  • Therefore, the solution provided by the present application can dynamically adjust the shooting angle used to detect the target object according to the rotation of the user's perspective, thereby improving the object recognition accuracy of the AR glasses.
  • This application also provides AR glasses and a storage medium, which have the above-mentioned beneficial effects and will not be repeated here.
  • FIG. 1 is a flowchart of an image display method provided by an embodiment of the application
  • FIG. 2 is a schematic diagram of the appearance of AR glasses provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of the target object recognition principle of AR glasses for high-altitude operations provided by an embodiment of the application;
  • FIG. 4 is a schematic flowchart of a scanning process of AR glasses for high-altitude operations provided by an embodiment of the application.
  • FIG. 1 is a flowchart of an image display method provided by an embodiment of the application.
  • S101 Use the first camera to detect eyeball rotation information of the wearer of the glasses
  • this embodiment can be applied to AR glasses.
  • the AR glasses can include a frame and spectacle lenses.
  • the AR glasses can also be assembled through a corresponding head-mounted device to obtain an AR helmet.
  • the spectacle frame of the AR glasses applied in this embodiment may include an inner spectacle frame facing the eyes of the wearer, and may also include an outer spectacle frame facing the outside.
  • The inner frame of the AR glasses is provided with a first camera, and the second camera is arranged on the outer frame of the AR glasses. The positions and the number of the first camera on the inner frame and the second camera on the outer frame are not limited here.
  • the first camera is used to detect the eyeball rotation information of the wearer of the glasses.
  • the eyeball rotation information may include movement changes of the eyeball, such as turning left by 30 degrees horizontally, and turning downward by 15 degrees vertically.
  • the operation of obtaining eyeball rotation information in this embodiment may include: determining the position of the pupil in the eye at the current moment, delaying a preset time to determine the position of the pupil in the eye at the next moment, and determining the eyeball rotation information according to the change in the pupil's position between the current moment and the next moment.
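The pupil-displacement step above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the `degrees_per_pixel` calibration constant and the pixel-coordinate convention are assumptions made for the example.

```python
def eyeball_rotation(p0, p1, degrees_per_pixel=0.1):
    """Estimate eyeball rotation from two pupil positions.

    p0, p1: (x, y) pupil centers in pixels at the current moment and
    at the next moment (after the preset delay). degrees_per_pixel is
    a hypothetical calibration constant mapping pupil displacement to
    rotation angle. Returns (horizontal_deg, vertical_deg), where
    positive values mean rotation to the right / upward.
    """
    dx = p1[0] - p0[0]   # rightward pupil shift in pixels
    dy = p0[1] - p1[1]   # image y grows downward, so invert for "up"
    return (dx * degrees_per_pixel, dy * degrees_per_pixel)
```

Under this assumed calibration, a pupil shift of 200 pixels to the right maps to a horizontal rotation of 20 degrees.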
  • The inner frame of the glasses usually includes two parts: the left inner frame and the right inner frame. A first camera may be provided on each of the left inner frame and the right inner frame; the first camera set on the left inner frame is used to detect the left eyeball rotation information of the glasses wearer, and the first camera set on the right inner frame is used to detect the right eyeball rotation information of the glasses wearer. Finally, the left eyeball rotation information and the right eyeball rotation information are averaged to obtain the eyeball rotation information determined in this step.
  • The position of the first camera set on the left inner frame and the position of the first camera set on the right inner frame are symmetrical about the central axis of the glasses, where the central axis is the line dividing the AR glasses into the left half and the right half of the glasses.
  • S102 Adjust the shooting angle of the second camera according to the eyeball rotation information, and obtain a target image collected after the second camera adjusts the shooting angle;
  • the second camera is a camera arranged on the outer frame of the glasses, and the second camera can take pictures for target object detection.
  • this embodiment adjusts the shooting angle of the second camera according to the eyeball rotation information, so that the second camera is used to shoot the target image after the shooting angle is adjusted.
  • the process of adjusting the shooting angle of the second camera in this embodiment may include: determining the information to be adjusted according to the eyeball rotation information, and adjusting the shooting angle of the second camera according to the information to be adjusted.
  • the corresponding relationship between the eyeball rotation information and the information to be adjusted can be preset.
  • the information to be adjusted can be determined according to the corresponding relationship between the eyeball rotation information and the information to be adjusted.
  • the information to be adjusted includes a target direction and a target angle, and the second camera is controlled to move by the target angle in the target direction.
  • For example, suppose the correspondence between the eyeball rotation information and the information to be adjusted is: the rotation direction of the information to be adjusted is consistent with the rotation direction of the eyeball, and the rotation angle of the information to be adjusted equals the eyeball rotation angle multiplied by a correlation coefficient (such as 0.98). If S101 detects that the eyeball rotates 20 degrees to the right, the second camera can be controlled to rotate 19.6 degrees to the right, so as to obtain the target image collected after the second camera is rotated 19.6 degrees to the right.
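The mapping from eyeball rotation to camera adjustment described above can be sketched as follows, using the example correlation coefficient of 0.98 from the text; the function name and interface are illustrative only.

```python
def camera_adjustment(eye_direction, eye_angle_deg, coefficient=0.98):
    """Map eyeball rotation to the second camera's adjustment.

    The camera turns in the same direction as the eyeball, by the
    eyeball's rotation angle scaled by a correlation coefficient
    (0.98 in the example from the text).
    """
    return eye_direction, eye_angle_deg * coefficient

direction, angle = camera_adjustment("right", 20)
# A 20-degree eye turn to the right maps to about 19.6 degrees.
```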
  • the second camera in this embodiment may continuously collect images at a preset frequency.
  • the image collected after adjusting the shooting angle of the second camera is set as the target image.
  • S103 Determine whether there is a target object in the target image; if yes, proceed to S104; if not, end the process;
  • this embodiment can perform an image recognition operation on the target image to determine whether there is a target object in the target image, and this embodiment does not limit the type of the target object.
  • the target objects may include mineral water bottles, newspapers, fallen leaves, and the like.
  • the process of judging whether there is a target object in the target image in this embodiment may include: performing an image recognition operation on the target image to obtain an image recognition result, where the image recognition result includes the positions of target objects in the target image and the number of target objects; if the number of target objects is 0, there is no target object in the target image; if the number of target objects is not 0, there is a target object in the target image, and the process proceeds to S104.
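The S103 decision reduces to checking the detection count in the recognition result. A minimal sketch, assuming the recognizer returns a list of detections (the dictionary layout is hypothetical):

```python
def has_target_object(recognition_result):
    """Decide S103: the target image contains a target object exactly
    when the image recognition result reports at least one detection.

    recognition_result: list of detections, each assumed to be a dict
    carrying the object's bounding box in the target image.
    """
    return len(recognition_result) > 0
```

An empty list ends the process; a non-empty list proceeds to S104.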
  • S104 Display the image including the position information of the target object through the AR glasses lens.
  • This step, on the basis of determining that the target image includes the target object, calculates the position information of the target object according to its position in the target image, generates an image including the position information, and displays that image through the AR glasses lens, so as to indicate the orientation of the target object to the wearer of the glasses.
  • this embodiment can display an image including the position information of all target objects through the AR glasses lens.
  • the AR glasses used in this embodiment include a first camera for detecting eyeball rotation information, and a second camera for collecting target images.
  • the shooting angle of the second camera is adjusted according to the eyeball rotation information of the wearer of the glasses, and the corresponding target image is collected at the shooting angle. If the target image includes the target object, the position information of the target object is displayed through the AR glasses lens.
  • the shooting angle of the second camera can be dynamically adjusted according to the real-time eyeball rotation information of the glasses wearer, and a target image that matches the observation area of the glasses wearer can be captured, so as to determine whether there is a target object in the area currently observed by the user.
  • Therefore, the solution provided by this embodiment can dynamically adjust the shooting angle used to detect the target object according to the rotation of the user's perspective, which improves the object recognition accuracy of the AR glasses.
  • this embodiment may also set the area corresponding to the target image as the first type of area, and update the area map;
  • the updated area map is displayed through the AR glasses lens.
  • the area map includes a first type area and a second type area, the first type area is an area that does not include the target object, and the second type area is an area that includes the target object.
  • Suppose the target object to be identified is dusty glass. The AR glasses can automatically identify glass with dust (that is, glass that needs to be wiped); after the staff wipes a glass area clean, the wiped glass area is automatically set as the first type of area, and the unwiped glass area is set as the second type of area.
  • the staff can determine the area that needs to work according to the area map.
  • the working area of high-altitude workers is large, and it is difficult for the human eye to scan the working area.
  • the AR glasses of this embodiment can display an area map and mark the completed area and the unfinished area.
  • the communication system of the AR glasses will send a completion signal to avoid area omission. This embodiment can perform attendance recording according to the completion signal sent by the communication system of the AR glasses, and in turn assess the completion degree of the operator's work.
  • this embodiment can also send the required data to the relevant server.
  • the sent data information can include the information of the wearer of the glasses, the scanned area, the number of targets in the area, etc., and these data can be analyzed.
  • the key prevention areas can be highlighted. For example, if too much garbage is detected in a certain place during a certain period of time, the area can be analyzed and treated in a focused manner.
  • The degree of completion of the work may also be calculated according to the area ratio of the first type of area to the second type of area in the updated area map, and displayed on the AR glasses lens.
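The completion degree described above can be read as the ratio of the finished (first-type) area to the total mapped area. A sketch under that reading (units are whatever the area map uses):

```python
def work_completion(first_type_area, second_type_area):
    """Work completion degree from the updated area map.

    first_type_area: total area with no remaining target objects.
    second_type_area: total area still containing target objects.
    Returns a fraction in [0, 1]; 0.0 for an empty map.
    """
    total = first_type_area + second_type_area
    return first_type_area / total if total else 0.0
```

For example, 30 m² wiped and 10 m² still dusty gives a completion degree of 0.75.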
  • The AR glasses mentioned in this embodiment can also be connected to a variety of safety devices by wired or wireless connection, so that the AR glasses can receive the safety monitoring information transmitted by the safety device and display the safety monitoring information through the AR glasses lens; wherein the safety device includes any one or a combination of a seat belt detection device, a blood pressure monitoring device, and a heart rate detection device.
  • the seat belt detection device is a device used to detect whether the seat belt buckle is in place.
  • Further, an image recognition operation may be performed on the target image, and the regional terrain of the area corresponding to the target image may be determined according to the image recognition result; if the regional terrain is dangerous terrain, corresponding prompt information is generated.
  • the second camera mentioned in this embodiment is a depth camera, which can be combined with the depth information in the target image to perform an area terrain recognition operation.
  • Further, S102 may obtain multiple target images collected after the second camera adjusts the shooting angle; determine the common shooting area of all the target images; sequentially perform a similarity comparison operation on the common shooting areas in the target images; determine, according to the similarity comparison result, whether there is a falling object in the common shooting area; and if so, generate corresponding alarm information.
  • The common shooting area of all target images is the overlapping part of the shooting areas corresponding to all the target images. Since there is a certain shooting time interval between the target images, this embodiment can sequentially compare the similarity of the images corresponding to the common shooting areas of the target images in shooting order, so as to determine whether any object moves downward during the shooting time intervals.
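Read as the overlap of all shooting areas, the common shooting area can be sketched as a rectangle intersection. This assumes each shooting area is an axis-aligned rectangle in a shared coordinate frame, which is an assumption of this sketch:

```python
def common_shooting_area(rects):
    """Intersect the shooting areas of all target images.

    rects: list of axis-aligned rectangles (x1, y1, x2, y2), one per
    target image, in a shared coordinate frame. Returns the overlapping
    rectangle, or None if the images share no common area.
    """
    x1 = max(r[0] for r in rects)
    y1 = max(r[1] for r in rects)
    x2 = min(r[2] for r in rects)
    y2 = min(r[3] for r in rects)
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)
```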
  • Specifically, this embodiment may sequentially perform similarity comparison operations on the common shooting areas in adjacent target images according to the shooting order, and determine, according to the similarity comparison results, whether there are objects whose positions change in the adjacent target images.
  • When the similarity comparison results show that the same object moves downward in N consecutive target images, it is determined that there is a falling object in the common shooting area, and corresponding alarm information can be generated.
  • By detecting objects falling from a high altitude, the operator can avoid danger caused by blind spots in the line of sight, improving safety.
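The N-consecutive-images rule can be sketched on the vertical positions of one matched object, standing in for the similarity-comparison step (the position list and the value of N are illustrative):

```python
def falling_object_detected(y_positions, n=3):
    """Flag a falling object when the same matched object moves
    downward in n consecutive target images.

    y_positions: vertical pixel coordinates of the object in the common
    shooting area, in shooting order (image y grows downward). This is
    a simplified stand-in for the similarity-comparison step.
    """
    consecutive = 0
    for prev, cur in zip(y_positions, y_positions[1:]):
        if cur > prev:                # moved downward between frames
            consecutive += 1
            if consecutive >= n - 1:  # downward across n images
                return True
        else:
            consecutive = 0
    return False
```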
  • the AR glasses mentioned in the embodiment corresponding to FIG. 1 may include an acceleration sensor for detecting the tilt angle of the head.
  • the AR glasses may calculate the visible area of the glasses wearer according to the head tilt angle and the eyeball rotation information; set the area in the visible area whose angle of view is less than a preset value as the area to be blocked; determine the target lens area in the AR glasses lens through which the eye of the glasses wearer observes the area to be blocked; and display a preset picture with a preset transparency in the target lens area.
  • the AR glasses mentioned in the above embodiments can be used in the process of high-altitude operations.
  • the visible area is the area that the glasses wearer can see under the current viewing angle.
  • the area in the visible area whose angle of view is less than the preset value is set as the area to be blocked.
  • the above angle of view refers to the angle between the horizontal line and the line from the user's pupil to a certain point on the boundary of the visible area.
  • This embodiment further determines the target lens area in the AR glasses lens through which the eye of the glasses wearer observes the area to be blocked, and displays a preset picture with a preset transparency in the target lens area, which can prevent the glasses wearer from seeing the environment below and help overcome the fear of heights.
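The angle-of-view test for the area to be blocked can be sketched as follows; the signed-angle convention, the simplified geometry, and the threshold value are all assumptions for illustration, not the patented method.

```python
import math

def should_block(pupil_height_m, point_distance_m, threshold_deg=-20.0):
    """Decide whether a point in the visible area falls in the
    to-be-blocked region.

    The angle of view is taken as the signed angle between the
    horizontal line and the line from the pupil to the point; points
    seen at a steep downward angle (below the threshold) are blocked
    so the wearer does not see the environment far below.
    """
    # Negative angle = looking downward by that many degrees.
    angle_deg = -math.degrees(math.atan2(pupil_height_m, point_distance_m))
    return angle_deg < threshold_deg
```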
  • FIG. 2 is a schematic diagram of the appearance of AR glasses provided by an embodiment of the application.
  • the AR glasses may include:
  • the first camera arranged on the inner frame of the glasses of the AR glasses is used to take pictures of the eyes of the wearer of the glasses;
  • the second camera arranged on the outer frame of the AR glasses is used to collect a target image
  • a processor, respectively connected to the first camera and the second camera, configured to determine the eyeball rotation information of the glasses wearer according to the eye picture; to adjust the shooting angle of the second camera according to the eyeball rotation information and acquire the target image collected after the second camera adjusts the shooting angle; and to determine whether there is a target object in the target image; if so, to display an image including the position information of the target object through the AR glasses lens.
  • the AR glasses used in this embodiment include a first camera for detecting eyeball rotation information, and a second camera for collecting target images.
  • the shooting angle of the second camera is adjusted according to the eyeball rotation information of the wearer of the glasses, and then the corresponding target image is collected at the shooting angle. If the target image includes the target object, the position information of the target object is displayed through the AR glasses lens.
  • the shooting angle of the second camera can be dynamically adjusted according to the eyeball rotation performed by the glasses wearer, so that the captured target image matches the area the wearer is observing and it can be determined whether a target object is present in that area.
  • the solution provided by this embodiment can dynamically adjust the shooting angle used to detect target objects as the user's viewing angle turns, which improves the object recognition accuracy of the AR glasses.
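The adjust-then-detect loop described above can be sketched as follows. The four callables (`read_eye_rotation`, `set_shooting_angle`, `capture`, `detect_targets`) are hypothetical stand-ins for hardware and recognition interfaces, not APIs defined by this application.

```python
# Minimal sketch of one display cycle: inner camera -> angle adjustment ->
# outer camera -> target detection. All four callables are assumed interfaces.
def display_cycle(read_eye_rotation, set_shooting_angle, capture, detect_targets):
    rotation = read_eye_rotation()      # eyeball rotation from the first camera
    set_shooting_angle(rotation)        # re-aim the second (outer) camera
    image = capture()                   # target image at the adjusted angle
    targets = detect_targets(image)     # positions of target objects, if any
    return targets                      # shown on the lens when non-empty

# Example run with stub interfaces:
log = []
found = display_cycle(
    read_eye_rotation=lambda: ("right", 19.6),
    set_shooting_angle=log.append,
    capture=lambda: "frame-0",
    detect_targets=lambda img: [(120, 45)],
)
```

In a real device the stubs would wrap the camera drivers; the point is only the order of operations the embodiment describes.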
  • the AR glasses also include:
  • a communication device connected to the processor and a safety device, used to send the safety monitoring information transmitted by the safety device to the processor; the safety device includes any one or a combination of a safety-belt detection device, a blood-pressure monitoring device and a heart-rate detection device;
  • the processor is further configured to display the safety monitoring information through the AR glasses lens.
  • the AR glasses further include:
  • a map update module, used to set the area corresponding to the target image as a first-type area and update the area map; the area map includes first-type areas and second-type areas, a first-type area being an area that does not contain the target object and a second-type area being an area that contains the target object; the updated area map is displayed through the AR glasses lenses.
  • a work progress calculation module, configured to calculate the work completion degree according to the area ratio of the first-type areas to the second-type areas in the updated area map, and to display the work completion degree on the AR glasses lenses.
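The completion-degree computation can be sketched as below; treating completion as the first-type (done) area over the total mapped area is one reasonable reading of the area-ratio rule, not a formula stated in the application.

```python
def work_completion(first_type_area, second_type_area):
    """Completion degree from the area ratio of first-type (no targets left)
    to second-type (targets remaining) regions. Areas in any common unit."""
    total = first_type_area + second_type_area
    return first_type_area / total if total else 1.0

done_fraction = work_completion(30.0, 10.0)  # 30 of 40 units done
```

The result (here 0.75) would be rendered on the lens as, e.g., "75% complete".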
  • a terrain recognition module, configured to perform an image recognition operation on the target image after the target image collected by the second camera with the adjusted shooting angle is acquired, and to determine the terrain of the area corresponding to the target image according to the image recognition result; if the terrain is dangerous, corresponding prompt information is generated.
  • the processor is also configured to acquire multiple target images collected after the second camera adjusts the shooting angle; to determine the common shooting area of all the target images; to perform a similarity comparison operation on the common shooting area of adjacent target images in shooting order; and to judge, according to the similarity comparison result, whether a falling object is present in the common shooting area; if so, corresponding alarm information is generated.
  • the AR glasses include an acceleration sensor for detecting the head tilt angle.
  • the processor is further configured to calculate the visible area of the glasses wearer according to the head tilt angle and the eyeball rotation information; to set the part of the visible area whose viewing angle is smaller than a preset value as the area to be blocked; to determine the target lens area through which the wearer's line of sight passes when observing the area to be blocked; and to display a preset picture with a preset transparency in the target lens area.
  • FIG. 3 is a schematic diagram of the target object recognition principle of AR glasses for high-altitude work according to an embodiment of this application.
  • FIG. 4 is a flowchart of the scanning process of AR glasses for high-altitude work according to an embodiment of this application.
  • the "target" in FIG. 3 and FIG. 4 refers to the target object.
  • two outer cameras are placed on the outside of the AR glasses.
  • when the wearer's head turns in a direction, the outer cameras focus on scanning the turned-to area; the outer cameras are also steered to scan in the corresponding direction according to the eye movements captured by the inner camera.
  • when the outer cameras scan a target (such as plastic garbage), the two outer cameras can locate it precisely; the target positioning information is displayed on the glasses lenses in the form of text or pictures and played by the voice broadcast module in the AR glasses.
  • the map of the area that needs to be scanned can be displayed on the AR glasses, and common area-map software packages can also be downloaded to the AR glasses. One of the two lenses can be selected to display the current scan area, the scanned area and the unscanned area.
  • after all working areas have been scanned, the AR glasses can upload data such as the number of targets in the area and information about the staff in the area to a server. The uploaded data can be used to analyze the targets and can also serve as a staff attendance record.
  • an inner camera is arranged on the inside of the frame of the AR glasses; it can capture the eye movements of the wearer.
  • for example, the outer cameras can be controlled to scan the corresponding area according to the direction in which the wearer's eyeballs rotate.
  • the number of targets actually collected can also be recorded based on the wearer's blinks.
  • the AR glasses can also have a built-in acceleration sensor. While the wearer keeps the head upright, everything the eyes can see is visible; when the wearer tilts the head downward beyond a certain angle, pictures appear on the AR glasses lenses to block the field of view below and prevent dizziness and other symptoms of fear of heights.
  • the occlusion mode can be selected as required: it can be turned off when the user does not need it and turned on when the user does.
  • the AR glasses provided in this embodiment can autonomously find the set target objects according to the wearer's purpose and display them on the lenses through images, voice, text, etc., which improves the work efficiency of high-altitude workers (e.g., high-altitude window cleaning or picking up garbage on cliffs); this embodiment can also switch between the transparent mode and the occlusion mode according to the wearer's needs, so that people with special symptoms can also adapt to this high-risk work.
  • the AR glasses can also report data to the server, which is conducive to data analysis.
  • the present application also provides a storage medium on which a computer program is stored, and when the computer program is executed, the steps provided in the above-mentioned embodiments can be implemented.
  • the storage medium may include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
  • the steps of the method or algorithm described in the embodiments disclosed in this document can be directly implemented by hardware, a software module executed by a processor, or a combination of the two.
  • the software module can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.


Abstract

An image display method, applied to AR glasses in which a first camera is arranged on the inner frame and a second camera on the outer frame. The image display method comprises: detecting eyeball rotation information of the glasses wearer using the first camera (S101); adjusting the shooting angle of the second camera according to the eyeball rotation information, and acquiring a target image collected by the second camera after the shooting angle is adjusted (S102); judging whether a target object is present in the target image (S103); and if so, displaying an image including the position information of the target object through the AR glasses lenses (S104). The method improves the object recognition accuracy of AR glasses.

Description

Image display method, AR glasses and storage medium
This application claims priority to Chinese patent application No. 202010403469.5, entitled "Image display method, AR glasses and storage medium" and filed with the China National Intellectual Property Administration on May 13, 2020, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the technical field of smart wearable devices, and in particular to an image display method, AR glasses and a storage medium.
Background
AR (Augmented Reality) technology ingeniously fuses virtual information with the real world: computer-generated virtual information such as text, images, three-dimensional models, music and video is simulated and applied to the real world, thereby "augmenting" it.
AR glasses using AR technology are already widely used in gaming and entertainment, medical care, shopping and education, but AR glasses in the related art focus on displaying preset virtual scenes such as roller coasters or racing tracks, and usually recognize only objects in the central area of the shooting range while ignoring objects in the edge areas, so their object recognition accuracy is low.
Therefore, how to improve the object recognition accuracy of AR glasses is a technical problem that those skilled in the art currently need to solve.
Summary
The purpose of this application is to provide an image display method, AR glasses and a storage medium that can improve the object recognition accuracy of AR glasses.
To solve the above technical problem, this application provides an image display method applied to AR glasses, wherein a first camera is arranged on the inner frame of the AR glasses and a second camera is arranged on the outer frame of the AR glasses, the image display method comprising:
detecting eyeball rotation information of the glasses wearer using the first camera;
adjusting the shooting angle of the second camera according to the eyeball rotation information, and acquiring a target image collected by the second camera after the shooting angle is adjusted;
judging whether a target object is present in the target image;
if so, displaying an image including the position information of the target object through the AR glasses lenses.
Optionally, if the target object is not present in the target image, the method further comprises:
setting the area corresponding to the target image as a first-type area, and updating an area map, wherein the area map includes first-type areas and second-type areas, a first-type area being an area that does not contain the target object and a second-type area being an area that contains the target object;
displaying the updated area map through the AR glasses lenses.
Optionally, after updating the area map, the method further comprises:
calculating the work completion degree according to the area ratio of the first-type areas to the second-type areas in the updated area map, and displaying the work completion degree on the AR glasses lenses.
Optionally, the method further comprises:
receiving safety monitoring information transmitted by a safety device, and displaying the safety monitoring information through the AR glasses lenses, wherein the safety device includes any one or a combination of a safety-belt detection device, a blood-pressure monitoring device and a heart-rate detection device.
Optionally, after acquiring the target image collected by the second camera after the shooting angle is adjusted, the method further comprises:
performing an image recognition operation on the target image, and determining the terrain of the area corresponding to the target image according to the image recognition result;
if the terrain is dangerous, generating corresponding prompt information.
Optionally, acquiring the target image collected by the second camera after the shooting angle is adjusted comprises:
acquiring multiple target images collected by the second camera after the shooting angle is adjusted;
correspondingly, the method further comprises:
determining the common shooting area of all the target images;
performing a similarity comparison operation on the common shooting area of adjacent target images in shooting order;
judging, according to the similarity comparison result, whether a falling object is present in the common shooting area;
if so, generating corresponding alarm information.
Optionally, the AR glasses include an acceleration sensor for detecting the head tilt angle;
correspondingly, the image display method further comprises:
calculating the visible area of the glasses wearer according to the head tilt angle and the eyeball rotation information;
setting the part of the visible area whose viewing angle is smaller than a preset value as the area to be blocked;
determining the target lens area of the AR glasses lenses through which the wearer's line of sight passes when observing the area to be blocked;
displaying a preset picture with a preset transparency in the target lens area.
This application also provides AR glasses, comprising:
a first camera arranged on the inner frame of the AR glasses, used to take pictures of the eyes of the glasses wearer;
a second camera arranged on the outer frame of the AR glasses, used to collect a target image;
a processor connected to the first camera and the second camera, configured to determine the eyeball rotation information of the glasses wearer according to the eye pictures; to adjust the shooting angle of the second camera according to the eyeball rotation information and acquire the target image collected after the adjustment; to judge whether a target object is present in the target image; and, if so, to display an image including the position information of the target object through the AR glasses lenses.
Optionally, the AR glasses further comprise:
a communication device connected to the processor and a safety device, used to send the safety monitoring information transmitted by the safety device to the processor, wherein the safety device includes any one or a combination of a safety-belt detection device, a blood-pressure monitoring device and a heart-rate detection device;
correspondingly, the processor is further configured to display the safety monitoring information through the AR glasses lenses.
This application also provides a storage medium on which a computer program is stored; when executed, the computer program implements the steps of the above image display method.
This application provides an image display method applied to AR glasses in which a first camera is arranged on the inner frame and a second camera on the outer frame, the method comprising: detecting eyeball rotation information of the glasses wearer using the first camera; adjusting the shooting angle of the second camera according to the eyeball rotation information, and acquiring a target image collected by the second camera after the shooting angle is adjusted; judging whether a target object is present in the target image; and, if so, displaying an image including the position information of the target object through the AR glasses lenses.
The AR glasses used in this application include a first camera for detecting eyeball rotation information and a second camera for collecting target images. The shooting angle of the second camera is adjusted according to the wearer's eyeball rotation information, and the corresponding target image is then collected at that angle. If the target image contains a target object, its position information is displayed through the AR glasses lenses. This application can dynamically adjust the shooting angle of the second camera according to the eyeball rotation performed by the wearer, so that the captured target image matches the area the wearer is observing and it can be determined whether a target object is present in that area. The solution provided by this application dynamically adjusts the shooting angle used to detect target objects as the user's viewing angle turns, which improves the object recognition accuracy of AR glasses. This application also provides AR glasses and a storage medium with the same beneficial effects, which are not repeated here.
Brief description of the drawings
To explain the technical solutions of the embodiments of this application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the following drawings are only some of the drawings of this application; for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a flowchart of an image display method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of the appearance of AR glasses provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the target object recognition principle of AR glasses for high-altitude work provided by an embodiment of this application;
FIG. 4 is a flowchart of the scanning process of AR glasses for high-altitude work provided by an embodiment of this application.
Detailed description
The technical solutions in the embodiments of this application are described below with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of this application.
Please refer to FIG. 1, which is a flowchart of an image display method provided by an embodiment of this application.
The specific steps may include:
S101: detecting eyeball rotation information of the glasses wearer using the first camera;
This embodiment may be applied to AR glasses, which may include a frame and lenses; the AR glasses may also be assembled with a corresponding head-mounted device into an AR helmet. Specifically, the frame of the AR glasses to which this embodiment applies may include an inner frame facing the wearer's eyes and an outer frame facing outward. In this embodiment a first camera is arranged on the inner frame of the AR glasses and a second camera on the outer frame; the positions and numbers of the first and second cameras in the inner and outer frames are not limited here.
This embodiment first detects the eyeball rotation information of the glasses wearer using the first camera. The eyeball rotation information may include changes in eye movement, such as turning 30 degrees horizontally to the left or 15 degrees vertically downward. As a feasible implementation, obtaining the eyeball rotation information may include: determining the position of the pupil in the eye at the current moment, determining the position of the pupil at the next moment after a preset delay, and determining the eyeball rotation information from the change in pupil position between the two moments. As another feasible implementation, the inner frame usually consists of two parts, a left inner frame and a right inner frame; one first camera may be arranged in each, the camera in the left inner frame detecting the rotation information of the wearer's left eyeball and the camera in the right inner frame detecting that of the right eyeball, and the eyeball rotation information determined in this step is obtained by averaging the left- and right-eyeball rotation information. Further, when one first camera is arranged in each of the left and right inner frames, the positions of the two first cameras are symmetric about the central axis of the glasses, the central axis being the line dividing the AR glasses into a left half and a right half.
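The pupil-displacement and left/right-averaging scheme described above can be sketched as follows; the pixel coordinates and the degrees-per-pixel calibration constant are illustrative assumptions, not values given in the application.

```python
# Hypothetical sketch: estimate eyeball rotation from two pupil samples per
# eye, then average the left- and right-eye results as the embodiment does.
def rotation_from_pupil(prev_xy, curr_xy, deg_per_px=0.5):
    """Convert pupil displacement (pixels) into rotation (degrees).
    deg_per_px is an assumed per-device calibration constant."""
    dx = (curr_xy[0] - prev_xy[0]) * deg_per_px
    dy = (curr_xy[1] - prev_xy[1]) * deg_per_px
    return (dx, dy)  # (horizontal, vertical) degrees

def average_rotation(left, right):
    """Mean of the left-eye and right-eye rotation estimates."""
    return ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)

left = rotation_from_pupil((100, 80), (140, 80))   # left eye: 20 deg right
right = rotation_from_pupil((100, 80), (132, 80))  # right eye: 16 deg right
combined = average_rotation(left, right)
```

The averaged value is what S102 would feed into the camera-angle adjustment.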
S102: adjusting the shooting angle of the second camera according to the eyeball rotation information, and acquiring the target image collected by the second camera after the shooting angle is adjusted;
The second camera is the camera arranged on the outer frame; it can take pictures used for target object detection. Having determined the eyeball rotation information, this embodiment adjusts the shooting angle of the second camera accordingly so that the target image can be taken at the adjusted angle. Specifically, adjusting the shooting angle of the second camera may include: determining to-be-adjusted information from the eyeball rotation information, and adjusting the shooting angle of the second camera according to the to-be-adjusted information. A correspondence between eyeball rotation information and to-be-adjusted information may be preset; once the eyeball rotation information is determined, the to-be-adjusted information, which includes a target direction and a target angle, is determined from this correspondence, and the second camera is controlled to move by the target angle in the target direction.
For example, suppose the correspondence is: the rotation direction of the to-be-adjusted information is the same as that of the eyeball rotation information, and the eyeball rotation angle multiplied by a correlation coefficient (e.g., 0.98) equals the angle to be adjusted. If the eyeball rotation information detected in S101 is a horizontal rotation of 20 degrees to the right, the target direction obtained from the correspondence is horizontally to the right and the target angle is 20 × 0.98 = 19.6 degrees; the second camera can then be controlled to rotate 19.6 degrees horizontally to the right, and the target image collected after this rotation is acquired.
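The worked example above can be expressed directly in code; the 0.98 correlation coefficient comes from the text, while the function name and direction encoding are illustrative.

```python
# Sketch of the example mapping: camera adjustment = eye rotation angle
# times a correlation coefficient, in the same direction as the eye movement.
CORRELATION = 0.98  # example coefficient from the embodiment

def camera_adjustment(direction, eye_angle_deg, k=CORRELATION):
    """Map detected eyeball rotation to the second camera's target
    direction and target angle."""
    return direction, eye_angle_deg * k

target_dir, target_angle = camera_adjustment("right", 20)  # ~19.6 degrees
```

Any monotonic direction-preserving mapping would fit the embodiment; the linear one is just the example it gives.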
It can be understood that the second camera in this embodiment may continuously collect images at a preset frequency; the images collected after the shooting angle is adjusted are taken as the target images.
S103: judging whether a target object is present in the target image; if so, proceeding to S104; if not, ending the process;
After the target image is obtained, this embodiment may perform an image recognition operation on it to judge whether a target object is present; the type of target object is not limited here. When this embodiment is applied to the work scenario of sanitation workers, target objects may include mineral water bottles, newspapers, fallen leaves and so on.
As a feasible implementation, judging whether a target object is present in the target image may include: performing an image recognition operation on the target image to obtain an image recognition result, where the result includes the positions of target objects in the target image and their number; if the number of target objects is 0, no target object is present in the target image; if it is not 0, a target object is present and the process may proceed to S104.
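The count-based presence decision described above amounts to a one-line predicate over the recognition result; the detection-box format below is a hypothetical illustration, since the application does not fix one.

```python
def has_target(recognition_result):
    """recognition_result: list of detected target boxes, e.g. (x, y, w, h)
    tuples (assumed format). The embodiment's rule: a count of 0 means no
    target object is present; any other count means one is."""
    return len(recognition_result) != 0

present = has_target([(10, 20, 30, 30)])  # one detection -> proceed to S104
absent = has_target([])                   # no detections -> end the process
```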
S104: displaying an image including the position information of the target object through the AR glasses lenses.
This step is based on the target image containing a target object: the position information of the target object is calculated from its position in the target image, an image including the position information is generated, and the image including the position information of the target object is displayed through the AR glasses lenses to indicate to the wearer where the target object is. As a feasible implementation, an image including the position information of all target objects may be displayed through the AR glasses lenses.
The AR glasses used in this embodiment include a first camera for detecting eyeball rotation information and a second camera for collecting target images. This embodiment adjusts the shooting angle of the second camera according to the wearer's eyeball rotation information and then collects the corresponding target image at that angle. If the target image contains a target object, its position information is displayed through the AR glasses lenses. This embodiment can dynamically adjust the shooting angle of the second camera according to the eyeball rotation performed by the wearer, so that the captured target image matches the area the wearer is observing and it can be determined whether a target object is present in that area. The solution provided by this embodiment dynamically adjusts the shooting angle used to detect target objects as the user's viewing angle turns, which improves the object recognition accuracy of the AR glasses.
As a further introduction to the embodiment corresponding to FIG. 1, if it is judged in S103 that no target object is present in the target image, this embodiment may also set the area corresponding to the target image as a first-type area, update the area map, and display the updated area map through the AR glasses lenses; the area map includes first-type areas and second-type areas, a first-type area being an area that does not contain the target object and a second-type area being an area that contains the target object.
When this embodiment is applied to high-altitude window cleaning, the target object to be recognized is glass with dust on it. When a worker wears the AR glasses provided by this embodiment for high-altitude window cleaning, dusty glass (i.e., glass that needs cleaning) can be recognized automatically; once the worker has cleaned a pane, the cleaned glass area is automatically set as a first-type area and the uncleaned glass area as a second-type area, so the worker can determine the area still to be worked from the area map. High-altitude workers cover large areas that the human eye can hardly scan; the AR glasses of this embodiment can display an area map annotated with completed and uncompleted areas. Moreover, when the wearer has scanned all areas and no target object remains, the communication system of the AR glasses sends a completion signal to avoid missed areas; this embodiment can keep attendance records based on that completion signal and evaluate each worker's degree of completion.
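The area-map bookkeeping in the window-cleaning scenario can be sketched as below; the region names and the string labels for the two area types are illustrative assumptions.

```python
# Sketch: regions with no remaining targets become first-type (done),
# regions still containing targets stay second-type; a completion signal
# is emitted once no second-type regions remain.
def update_area_map(area_map, region, target_count):
    """Mark `region` by its remaining target count and report whether the
    whole map is done (all regions first-type)."""
    area_map[region] = "first" if target_count == 0 else "second"
    all_done = all(kind == "first" for kind in area_map.values())
    return area_map, all_done

# Example: two window panes, both initially uncleaned (second-type):
panes = {"pane-A": "second", "pane-B": "second"}
panes, done = update_area_map(panes, "pane-A", 0)  # pane-A cleaned, B pending
```

The `done` flag plays the role of the completion signal the communication system would send.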
Of course, this embodiment may also send the required data to a relevant server; the data may include wearer information, the scanned area, the number of target objects in that area, etc., and can then be analyzed. When there are too many target objects, key prevention areas can be highlighted; for example, if excessive garbage is detected somewhere during a period, that area can be analyzed and managed with priority.
As a further introduction to the embodiment corresponding to FIG. 1, after the area map is updated according to the first-type and second-type areas, the work completion degree may also be calculated from the area ratio of the first-type to second-type areas in the updated area map and displayed on the AR glasses lenses.
As a further introduction to the embodiment corresponding to FIG. 1, the AR glasses mentioned in this embodiment may also be connected, wired or wirelessly, to various safety devices, so that the AR glasses receive safety monitoring information transmitted by a safety device and display it through the AR glasses lenses; the safety device includes any one or a combination of a safety-belt detection device, a blood-pressure monitoring device and a heart-rate detection device. The safety-belt detection device detects whether the safety-belt buckle is in place.
As a further introduction to the embodiment corresponding to FIG. 1, after the target image collected by the second camera with the adjusted shooting angle is acquired in S102, an image recognition operation may also be performed on the target image and the terrain of the corresponding area determined from the image recognition result; if the terrain is dangerous, corresponding prompt information is generated. As a feasible implementation, the second camera mentioned in this embodiment is a depth camera, and the terrain recognition operation can use the depth information in the target image.
As a further introduction to the embodiment corresponding to FIG. 1, S102 may acquire multiple target images collected after the second camera adjusts the shooting angle; determine the common shooting area of all the target images; perform a similarity comparison operation on the common shooting area of adjacent target images in shooting order; judge from the similarity comparison result whether a falling object is present in the common shooting area; and, if so, generate corresponding alarm information.
The common shooting area of all target images is the overlapping part of their shooting areas. Since there is a certain shooting interval between target images, comparing the images of the common shooting area in shooting order can determine whether an object moved downward during the interval.
Specifically, this embodiment may perform the similarity comparison operation on the common shooting area of adjacent target images in shooting order and, from the comparison result, identify objects whose position changes between adjacent target images; if the same object moves downward across N consecutive target images, a falling object is judged to be present in the common shooting area, and corresponding alarm information can be generated. By detecting falling objects, this embodiment prevents the operator from being endangered by blind spots and improves safety.
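The N-consecutive-frames rule above can be sketched once the similarity comparison has matched the same object across frames; the code below assumes that matching has already produced the object's vertical position per frame (an assumption, since the application does not fix the matching method).

```python
# Sketch: an object whose matched position moves downward in n consecutive
# frame pairs of the common shooting area is flagged as a falling object.
def falling_object(track_ys, n=3):
    """track_ys: vertical positions of one matched object in shooting order
    (larger y = lower in the image). True if it moved down across n
    consecutive frame pairs."""
    run = 0
    for prev, curr in zip(track_ys, track_ys[1:]):
        run = run + 1 if curr > prev else 0
        if run >= n:
            return True
    return False
```

A steadily descending track trips the alarm; jitter around a fixed position does not.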
As a further introduction to the embodiment corresponding to FIG. 1, whether to activate the occlusion mode can be chosen according to the wearer's head movements. After the occlusion mode is activated, when the wearer looks down, pictures appear on the lenses to block the objects below; in this way, people afraid of heights can also do high-altitude work, widening the range of employable workers. Further, the AR glasses mentioned in the embodiment corresponding to FIG. 1 may include an acceleration sensor for detecting the head tilt angle. The AR glasses may calculate the wearer's visible area from the head tilt angle and the eyeball rotation information; set the part of the visible area whose viewing angle is smaller than a preset value as the area to be blocked; determine the target lens area through which the wearer's line of sight passes when observing the area to be blocked; and display a preset picture with a preset transparency in that target lens area.
The AR glasses in the above implementation can be used in high-altitude work; the visible area is what the glasses wearer can see from the current viewing angle. To prevent a wearer at height from experiencing fear of heights, this embodiment sets the part of the visible area whose viewing angle is smaller than a preset value as the area to be blocked, the viewing angle being the angle between the horizontal and the line from the wearer's pupil to a point on the boundary of the visible area. This embodiment further determines the target lens area through which the wearer's line of sight passes when observing the area to be blocked, and displays a preset picture with a preset transparency in that target lens area, which prevents the wearer from seeing the environment below and helps overcome the fear of heights.
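The angle test that selects the area to be blocked can be sketched as below; the sign convention (negative = below the horizontal) and the -30 degree preset are illustrative assumptions, since the application only says "smaller than a preset value".

```python
# Sketch: sample the boundary of the visible area, compute each point's
# sight-line angle relative to the horizontal, and block the points whose
# angle falls below the preset value.
def area_to_block(boundary_angles_deg, preset_deg=-30.0):
    """boundary_angles_deg: angles between the horizontal and the lines
    from the pupil to sampled boundary points of the visible area
    (negative = below the horizon). Returns the angles to be occluded
    by the preset picture."""
    return [a for a in boundary_angles_deg if a < preset_deg]

blocked = area_to_block([10.0, -20.0, -45.0, -60.0])  # steep downward views
```

Points kept by the filter map, via the lens geometry, to the target lens area where the preset picture is drawn.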
Please refer to FIG. 2, which is a schematic diagram of the appearance of AR glasses provided by an embodiment of this application. The AR glasses may include:
a first camera arranged on the inner frame of the AR glasses, used to take pictures of the eyes of the glasses wearer;
a second camera arranged on the outer frame of the AR glasses, used to collect a target image;
a processor connected to the first camera and the second camera, configured to determine the eyeball rotation information of the glasses wearer according to the eye pictures; to adjust the shooting angle of the second camera according to the eyeball rotation information and acquire the target image collected after the adjustment; to judge whether a target object is present in the target image; and, if so, to display an image including the position information of the target object through the AR glasses lenses.
The AR glasses used in this embodiment include a first camera for detecting eyeball rotation information and a second camera for collecting target images. This embodiment adjusts the shooting angle of the second camera according to the wearer's eyeball rotation information and then collects the corresponding target image at that angle. If the target image contains a target object, its position information is displayed through the AR glasses lenses. This embodiment can dynamically adjust the shooting angle of the second camera according to the eyeball rotation performed by the wearer, so that the captured target image matches the area the wearer is observing and it can be determined whether a target object is present in that area. The solution provided by this embodiment dynamically adjusts the shooting angle used to detect target objects as the user's viewing angle turns, which improves the object recognition accuracy of the AR glasses.
Further, the AR glasses also include:
a communication device connected to the processor and a safety device, used to send the safety monitoring information transmitted by the safety device to the processor, wherein the safety device includes any one or a combination of a safety-belt detection device, a blood-pressure monitoring device and a heart-rate detection device;
correspondingly, the processor is further configured to display the safety monitoring information through the AR glasses lenses.
If the target object is not present in the target image, the AR glasses further include:
a map update module, used to set the area corresponding to the target image as a first-type area and update the area map, wherein the area map includes first-type areas and second-type areas, a first-type area being an area that does not contain the target object and a second-type area being an area that contains the target object; the updated area map is displayed through the AR glasses lenses.
a work progress calculation module, configured to calculate the work completion degree according to the area ratio of the first-type areas to the second-type areas in the updated area map, and to display the work completion degree on the AR glasses lenses.
Further, the AR glasses also include:
a terrain recognition module, configured to perform an image recognition operation on the target image after the target image collected by the second camera with the adjusted shooting angle is acquired, and to determine the terrain of the area corresponding to the target image according to the image recognition result; if the terrain is dangerous, corresponding prompt information is generated.
Further, the processor is also configured to acquire multiple target images collected after the second camera adjusts the shooting angle; to determine the common shooting area of all the target images; to perform a similarity comparison operation on the common shooting area of adjacent target images in shooting order; and to judge, according to the similarity comparison result, whether a falling object is present in the common shooting area; if so, corresponding alarm information is generated.
Further, the AR glasses include an acceleration sensor for detecting the head tilt angle;
correspondingly, the processor is further configured to calculate the visible area of the glasses wearer according to the head tilt angle and the eyeball rotation information; to set the part of the visible area whose viewing angle is smaller than a preset value as the area to be blocked; to determine the target lens area through which the wearer's line of sight passes when observing the area to be blocked; and to display a preset picture with a preset transparency in the target lens area.
Please refer to FIG. 3 and FIG. 4. FIG. 3 is a schematic diagram of the target object recognition principle of AR glasses for high-altitude work provided by an embodiment of this application, and FIG. 4 is a flowchart of the scanning process of such AR glasses. The "target" in FIG. 3 and FIG. 4 is the target object.
In this embodiment two outer cameras are placed on the outside of the AR glasses. When the wearer's head turns in a direction, the outer cameras focus on scanning the turned-to area; the outer cameras are also steered to scan in the corresponding direction according to the eye movements captured by the inner camera. When the outer cameras scan a target (for example plastic garbage), the two outer cameras can locate it precisely; the target positioning information is displayed on the lenses in the form of text or pictures and played by the voice broadcast module in the AR glasses.
In this embodiment the map of the area that needs to be scanned can be displayed on the AR glasses, and common area-map software packages can also be downloaded to the AR glasses. One of the two lenses can be selected to display the current scan area, the scanned area and the unscanned area. After all working areas have been scanned, the AR glasses can upload data such as the number of targets in the area and information about the staff in the area to a server. The uploaded data can be used to analyze the targets and can also serve as a staff attendance record.
An inner camera is arranged on the inside of the frame of the AR glasses; it can capture the eye movements of the wearer. For example, the outer cameras can be controlled to scan the corresponding area according to the direction in which the wearer's eyeballs rotate. The number of targets actually collected can also be recorded based on the wearer's blinks.
An acceleration sensor can also be built into the AR glasses. While the wearer keeps the head upright, everything the eyes can see is visible; when the wearer tilts the head downward beyond a certain angle, pictures appear on the AR glasses lenses to block the field of view below and prevent dizziness and other symptoms of fear of heights. The occlusion mode can be selected as required: it can be turned off when the user does not need it and turned on when the user does.
The AR glasses provided in this embodiment can autonomously find the set target objects according to the wearer's purpose and display them on the lenses through images, voice, text, etc., which improves the work efficiency of high-altitude workers (e.g., high-altitude window cleaning or picking up garbage on cliffs); this embodiment can also switch between the transparent mode and the occlusion mode according to the wearer's needs, so that people with special symptoms can also adapt to this high-risk work. In addition, the AR glasses can report data to a server, which facilitates data analysis.
Since the embodiments of the device part correspond to those of the method part, for the device embodiments please refer to the description of the method embodiments, which is not repeated here.
This application also provides a storage medium on which a computer program is stored; when executed, the computer program can implement the steps provided by the above embodiments. The storage medium may include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The embodiments in this specification are described in a parallel or progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may be referred to one another. Since the devices disclosed in the embodiments correspond to the methods disclosed, their description is relatively brief; see the method part for relevant details.
A person of ordinary skill in the art may further appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are executed in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
The steps of the methods or algorithms described in the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
It should also be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.

Claims (10)

  1. An image display method, applied to AR glasses, wherein a first camera is arranged on the inner frame of the AR glasses and a second camera is arranged on the outer frame of the AR glasses, the image display method comprising:
    detecting eyeball rotation information of the glasses wearer using the first camera;
    adjusting the shooting angle of the second camera according to the eyeball rotation information, and acquiring a target image collected by the second camera after the shooting angle is adjusted;
    judging whether a target object is present in the target image;
    if so, displaying an image including the position information of the target object through the AR glasses lenses.
  2. The image display method according to claim 1, further comprising, if the target object is not present in the target image:
    setting the area corresponding to the target image as a first-type area, and updating an area map, wherein the area map includes first-type areas and second-type areas, a first-type area being an area that does not contain the target object and a second-type area being an area that contains the target object;
    displaying the updated area map through the AR glasses lenses.
  3. The image display method according to claim 2, further comprising, after updating the area map:
    calculating the work completion degree according to the area ratio of the first-type areas to the second-type areas in the updated area map, and displaying the work completion degree on the AR glasses lenses.
  4. The image display method according to claim 1, further comprising:
    receiving safety monitoring information transmitted by a safety device, and displaying the safety monitoring information through the AR glasses lenses, wherein the safety device includes any one or a combination of a safety-belt detection device, a blood-pressure monitoring device and a heart-rate detection device.
  5. The image display method according to claim 1, further comprising, after acquiring the target image collected by the second camera after the shooting angle is adjusted:
    performing an image recognition operation on the target image, and determining the terrain of the area corresponding to the target image according to the image recognition result;
    if the terrain is dangerous, generating corresponding prompt information.
  6. The image display method according to claim 1, wherein acquiring the target image collected by the second camera after the shooting angle is adjusted comprises:
    acquiring multiple target images collected by the second camera after the shooting angle is adjusted;
    correspondingly, the method further comprises:
    determining the common shooting area of all the target images;
    performing a similarity comparison operation on the common shooting area of adjacent target images in shooting order;
    judging, according to the similarity comparison result, whether a falling object is present in the common shooting area;
    if so, generating corresponding alarm information.
  7. The image display method according to any one of claims 1 to 6, wherein the AR glasses include an acceleration sensor for detecting the head tilt angle;
    correspondingly, the image display method further comprises:
    calculating the visible area of the glasses wearer according to the head tilt angle and the eyeball rotation information;
    setting the part of the visible area whose viewing angle is smaller than a preset value as the area to be blocked;
    determining the target lens area of the AR glasses lenses through which the wearer's line of sight passes when observing the area to be blocked;
    displaying a preset picture with a preset transparency in the target lens area.
  8. AR glasses, comprising:
    a first camera arranged on the inner frame of the AR glasses, used to take pictures of the eyes of the glasses wearer;
    a second camera arranged on the outer frame of the AR glasses, used to collect a target image;
    a processor connected to the first camera and the second camera, configured to determine the eyeball rotation information of the glasses wearer according to the eye pictures; to adjust the shooting angle of the second camera according to the eyeball rotation information and acquire the target image collected after the adjustment; to judge whether a target object is present in the target image; and, if so, to display an image including the position information of the target object through the AR glasses lenses.
  9. The AR glasses according to claim 8, further comprising:
    a communication device connected to the processor and a safety device, used to send the safety monitoring information transmitted by the safety device to the processor, wherein the safety device includes any one or a combination of a safety-belt detection device, a blood-pressure monitoring device and a heart-rate detection device;
    correspondingly, the processor is further configured to display the safety monitoring information through the AR glasses lenses.
  10. A storage medium, wherein computer-executable instructions are stored in the storage medium; when loaded and executed by a processor, the computer-executable instructions implement the steps of the image display method according to any one of claims 1 to 7.
PCT/CN2020/127358 2020-05-13 2020-11-07 一种图像显示方法、ar眼镜及存储介质 WO2021227402A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/996,459 US11835726B2 (en) 2020-05-13 2020-11-07 Image display method, AR glasses and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010403469.5A CN111552076B (zh) 2020-05-13 2020-05-13 一种图像显示方法、ar眼镜及存储介质
CN202010403469.5 2020-05-13

Publications (1)

Publication Number Publication Date
WO2021227402A1 true WO2021227402A1 (zh) 2021-11-18

Family

ID=72004634

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127358 WO2021227402A1 (zh) 2020-05-13 2020-11-07 一种图像显示方法、ar眼镜及存储介质

Country Status (3)

Country Link
US (1) US11835726B2 (zh)
CN (1) CN111552076B (zh)
WO (1) WO2021227402A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118053088A (zh) * 2024-04-16 2024-05-17 南京大量数控科技有限公司 基于ar眼镜工业生产中排线安装顺序及不稳定的分析方法

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552076B (zh) 2020-05-13 2022-05-06 歌尔科技有限公司 一种图像显示方法、ar眼镜及存储介质
CN114077303A (zh) * 2020-08-21 2022-02-22 广州视享科技有限公司 Ar眼镜摄像头角度调整方法、装置、电子设备及介质
CN111950520B (zh) * 2020-08-27 2022-12-02 重庆紫光华山智安科技有限公司 图像识别方法、装置、电子设备及存储介质
CN114285984B (zh) * 2020-09-27 2024-04-16 宇龙计算机通信科技(深圳)有限公司 基于ar眼镜的摄像头切换方法、装置、存储介质以及终端
CN112558302B (zh) * 2020-12-08 2022-12-20 恒玄科技(上海)股份有限公司 一种用于确定眼镜姿态的智能眼镜及其信号处理方法
CN112712597A (zh) * 2020-12-21 2021-04-27 上海影创信息科技有限公司 目的地相同用户的轨迹提示方法和系统
CN113570624A (zh) * 2021-07-30 2021-10-29 平安科技(深圳)有限公司 基于智能穿戴眼镜的前景信息提示方法及相关设备
CN114327066A (zh) * 2021-12-30 2022-04-12 上海曼恒数字技术股份有限公司 虚拟现实屏幕的立体显示方法、装置、设备及存储介质
CN114815257A (zh) * 2022-04-25 2022-07-29 歌尔股份有限公司 一种xr眼镜及摄像头调整方法、系统、设备、介质
CN115097903B (zh) * 2022-05-19 2024-04-05 深圳智华科技发展有限公司 Mr眼镜控制方法、装置、mr眼镜及存储介质
CN115103094A (zh) * 2022-06-16 2022-09-23 深圳市天趣星空科技有限公司 一种基于注视点的摄像头模组远视角调节方法及系统
CN115866388B (zh) * 2022-11-24 2023-06-30 广州新城建筑设计院有限公司 智能眼镜拍摄控制方法、装置、存储介质及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150346816A1 (en) * 2014-05-30 2015-12-03 Moriahtown Co., Ltd. Display device using wearable eyeglasses and method of operating the same
CN105827960A (zh) * 2016-03-21 2016-08-03 乐视网信息技术(北京)股份有限公司 一种成像方法及装置
CN109086726A (zh) * 2018-08-10 2018-12-25 陈涛 一种基于ar智能眼镜的局部图像识别方法及系统
CN109254659A (zh) * 2018-08-30 2019-01-22 Oppo广东移动通信有限公司 穿戴式设备的控制方法、装置、存储介质及穿戴式设备
CN109683701A (zh) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 基于视线跟踪的增强现实交互方法和装置
CN111552076A (zh) * 2020-05-13 2020-08-18 歌尔科技有限公司 一种图像显示方法、ar眼镜及存储介质

Family Cites Families (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2729749Y (zh) * 2004-05-24 2005-09-28 孙学川 心理眼镜
KR100947990B1 (ko) * 2008-05-15 2010-03-18 성균관대학교산학협력단 차영상 엔트로피를 이용한 시선 추적 장치 및 그 방법
US20150309316A1 (en) * 2011-04-06 2015-10-29 Microsoft Technology Licensing, Llc Ar glasses with predictive control of external device based on event input
US9507416B2 (en) * 2011-02-22 2016-11-29 Robert Howard Kimball Providing a corrected view based on the position of a user with respect to a mobile platform
US8913789B1 (en) * 2012-01-06 2014-12-16 Google Inc. Input methods and systems for eye positioning using plural glints
CN202810048U (zh) * 2012-09-07 2013-03-20 林铭昭 一种楼梯栏杆的安全挡板
US20190272029A1 (en) * 2012-10-05 2019-09-05 Elwha Llc Correlating user reaction with at least an aspect associated with an augmentation of an augmented view
US10262462B2 (en) * 2014-04-18 2019-04-16 Magic Leap, Inc. Systems and methods for augmented and virtual reality
KR102246310B1 (ko) * 2013-12-31 2021-04-29 아이플루언스, 인크. 시선-기반 미디어 선택 및 편집을 위한 시스템들 및 방법들
US9934573B2 (en) * 2014-09-17 2018-04-03 Intel Corporation Technologies for adjusting a perspective of a captured image for display
US10567641B1 (en) * 2015-01-19 2020-02-18 Devon Rueckner Gaze-directed photography
US10890751B2 (en) * 2016-02-05 2021-01-12 Yu-Hsuan Huang Systems and applications for generating augmented reality images
TWI576787B (zh) * 2016-02-05 2017-04-01 黃宇軒 擴增實境影像產生系統及其應用
US10477157B1 (en) * 2016-03-02 2019-11-12 Meta View, Inc. Apparatuses, methods and systems for a sensor array adapted for vision computing
JP6738641B2 (ja) * 2016-04-11 2020-08-12 株式会社バンダイナムコエンターテインメント シミュレーション制御装置及びシミュレーション制御プログラム
CN105955456B (zh) * 2016-04-15 2018-09-04 深圳超多维科技有限公司 虚拟现实与增强现实融合的方法、装置及智能穿戴设备
CN205812229U (zh) * 2016-06-23 2016-12-14 青岛歌尔声学科技有限公司 一种头戴显示器、视频输出设备和视频处理系统
JP2018036993A (ja) * 2016-09-02 2018-03-08 オリンパス株式会社 表示システム、携帯情報機器、ウエラブル型端末、情報表示方法およびプログラム
JP2018082363A (ja) * 2016-11-18 2018-05-24 セイコーエプソン株式会社 頭部装着型表示装置およびその制御方法、並びにコンピュータープログラム
CN106444088A (zh) * 2016-11-18 2017-02-22 无锡新人居科贸有限公司 一种新型眼镜
CN106933348A (zh) * 2017-01-24 2017-07-07 武汉黑金科技有限公司 一种基于虚拟现实的脑电神经反馈干预系统及方法
CN107092346B (zh) * 2017-03-01 2021-12-24 联想(北京)有限公司 一种信息处理方法及电子设备
CN206711427U (zh) * 2017-03-13 2017-12-05 福州众衡时代信息科技有限公司 一种基于虚拟现实技术的恐高训练装置
GB201709199D0 (en) * 2017-06-09 2017-07-26 Delamont Dean Lindsay IR mixed reality and augmented reality gaming system
WO2018235088A1 (en) * 2017-06-21 2018-12-27 Luxembourg Itzhak MAGNIFICATION LENSES WITH MULTIPLE CAMERAS
CN111133364B (zh) * 2017-07-13 2021-11-09 华为技术有限公司 双模式耳机
KR20190015907A (ko) * 2017-08-07 2019-02-15 주식회사 엠투에스 고소공포증 치료를 위한 가상 현실 시스템
JP2019031362A (ja) * 2017-08-07 2019-02-28 株式会社日立ビルシステム 展望用エレベーター制御システム
CN207397225U (zh) * 2017-10-09 2018-05-22 中山大学中山眼科中心 捕捉眼部信息的组件以及控制系统
CN109960061A (zh) * 2017-12-22 2019-07-02 托普瑞德(无锡)设计顾问有限公司 一种便于拆装和调节的带有摄像头的眼镜
KR20190089627A (ko) * 2018-01-23 2019-07-31 삼성전자주식회사 Ar 서비스를 제공하는 디바이스 및 그 동작 방법
US10825251B2 (en) * 2018-02-08 2020-11-03 Varian Medical Systems International Ag Systems and methods for providing medical information and for performing a medically-related process using augmented reality technology
WO2019164514A1 (en) * 2018-02-23 2019-08-29 Google Llc Transitioning between map view and augmented reality view
US10586394B2 (en) * 2018-03-01 2020-03-10 Intel Corporation Augmented reality depth sensing using dual camera receiver
CN208345545U (zh) * 2018-05-22 2019-01-08 上海百通项目管理咨询有限公司 一种建筑升降装置
EP3572912A1 (en) * 2018-05-23 2019-11-27 Universitat de Barcelona Methods and systems for gradual exposure to a fear
CN108829250A (zh) * 2018-06-04 2018-11-16 苏州市职业大学 一种基于增强现实ar的对象互动展示方法
CN108732764B (zh) * 2018-06-06 2024-05-31 北京七鑫易维信息技术有限公司 一种智能眼镜、眼球轨迹的追踪方法、装置及存储介质
US11151793B2 (en) * 2018-06-26 2021-10-19 Magic Leap, Inc. Waypoint creation in map detection
CN109044373B (zh) * 2018-07-12 2022-04-05 济南博图信息技术有限公司 基于虚拟现实和眼动脑波检测的恐高症测评系统
CN209148966U (zh) * 2018-11-30 2019-07-23 歌尔科技有限公司 一种头戴设备
US10930275B2 (en) * 2018-12-18 2021-02-23 Microsoft Technology Licensing, Llc Natural language input disambiguation for spatialized regions
US11320957B2 (en) * 2019-01-11 2022-05-03 Microsoft Technology Licensing, Llc Near interaction mode for far virtual object
CN109829927B (zh) * 2019-01-31 2020-09-01 深圳职业技术学院 一种电子眼镜及高空景象图像重建方法
CN110007466A (zh) * 2019-04-30 2019-07-12 歌尔科技有限公司 一种ar眼镜及人机交互方法、系统、设备、计算机介质
US11698529B2 (en) * 2019-07-09 2023-07-11 Meta Platforms Technologies, Llc Systems and methods for distributing a neural network across multiple computing devices
CN110647822A (zh) * 2019-08-30 2020-01-03 重庆博拉智略科技有限公司 高空抛物行为识别方法、装置、存储介质及电子设备
EP3901687A4 (en) * 2019-09-10 2022-07-13 Lg Electronics Inc. ELECTRONIC DEVICE
CN110824699B (zh) * 2019-12-25 2020-12-04 歌尔光学科技有限公司 一种近眼显示设备的眼球追踪系统及近眼显示设备


Also Published As

Publication number Publication date
US20230213773A1 (en) 2023-07-06
CN111552076B (zh) 2022-05-06
CN111552076A (zh) 2020-08-18
US11835726B2 (en) 2023-12-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20935242

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20935242

Country of ref document: EP

Kind code of ref document: A1