WO2017206042A1 - Occlusion see-through method and device based on smart glasses (基于智能眼镜的遮挡物透视方法及装置) - Google Patents

Occlusion see-through method and device based on smart glasses (基于智能眼镜的遮挡物透视方法及装置)

Info

Publication number
WO2017206042A1
WO2017206042A1 (application PCT/CN2016/084002, CN2016084002W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
occlusion
smart glasses
user
view
Prior art date
Application number
PCT/CN2016/084002
Other languages
English (en)
French (fr)
Inventor
付楠
谢耀钦
朱艳春
余绍德
陈昳丽
张志诚
Original Assignee
中国科学院深圳先进技术研究院
Priority date
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Priority to PCT/CN2016/084002 priority Critical patent/WO2017206042A1/zh
Publication of WO2017206042A1 publication Critical patent/WO2017206042A1/zh
Priority to US16/008,815 priority patent/US10607414B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30268Vehicle interior
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to the field of human-computer interaction technology, and in particular to an occlusion see-through method and device based on smart glasses.
  • the invention provides an occlusion see-through method and device based on smart glasses, so as to solve the problem that existing smart glasses cannot see through an occlusion.
  • the present invention provides an occlusion see-through method based on smart glasses, comprising: collecting a first image of a user's field of view through the smart glasses and identifying an occlusion image in the first image; collecting a second image of the user's field of view through at least one image capture device; and replacing the occlusion image in the first image with the portion of the second image corresponding to the occlusion image, splicing to generate an unoccluded image of the user's field of view.
  • the present invention also provides an occlusion see-through device based on smart glasses, comprising: an occlusion image acquisition unit, configured to collect a first image of a user's field of view through the smart glasses and identify an occlusion image in the first image; a replacement image acquisition unit, configured to acquire a second image of the user's field of view through at least one image capture device; and an unoccluded image generating unit, configured to replace the occlusion image in the first image with the portion of the second image corresponding to the occlusion image, splicing to generate an unoccluded image of the user's field of view.
  • the occlusion see-through method and device based on smart glasses in the embodiments of the present invention use the image of the user's field of view collected by the camera of the smart glasses, analyze and identify the position and image of the occlusion in the field of view through various methods, collect the image of the occluded field of view with an external image acquisition device, replace the occluded portion with the image captured by the external detector, and then register and splice the replacement image with the user's field-of-view image. The smart glasses can thus show the image blocked by the occlusion while the user looks at it, producing a see-through effect and effectively removing the blind angle caused by the occlusion in the user's field of view.
  • FIG. 1 is a schematic flow chart of an occlusion see-through method based on smart glasses according to an embodiment of the present invention;
  • FIG. 2 is a schematic flow chart of a method for stitching and generating an unoccluded image of a user's field of view according to an embodiment of the present invention
  • FIG. 3 is a schematic flow chart of a method for stitching an unoccluded image of a user's field of view in another embodiment of the present invention
  • FIG. 4 is a schematic flow chart of an occlusion see-through method based on smart glasses according to still another embodiment of the present invention;
  • FIG. 5 is a schematic flow chart of an occlusion see-through method based on smart glasses according to still another embodiment of the present invention;
  • FIG. 6 is a schematic flow chart of an occlusion see-through method based on smart glasses according to another embodiment of the present invention;
  • FIG. 7 is a schematic flow chart of a method for collecting a second image of a user's field of view by using multiple image acquisition devices according to an embodiment of the invention
  • FIG. 8 is a schematic flow chart of a method for stitching a second image according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a first image acquired by smart glasses according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a second image collected by an image acquisition device and corresponding to the image portion of the occlusion object shown in FIG. 9 according to an embodiment of the present invention.
  • Figure 11 is a schematic view of an unobstructed image obtained by splicing the images shown in Figures 9 and 10;
  • FIG. 12 is a schematic structural diagram of an occlusion see-through device based on smart glasses according to an embodiment of the present invention;
  • FIG. 13 is a schematic structural diagram of an unoccluded image generating unit according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of an unoccluded image generating unit according to another embodiment of the present invention;
  • FIG. 15 is a schematic structural diagram of an occlusion see-through device based on smart glasses according to another embodiment of the present invention;
  • FIG. 16 is a schematic structural diagram of an occlusion see-through device based on smart glasses according to another embodiment of the present invention;
  • FIG. 17 is a schematic structural diagram of an occlusion see-through device based on smart glasses according to still another embodiment of the present invention;
  • FIG. 18 is a schematic structural diagram of a replacement image acquisition unit according to an embodiment of the present invention.
  • in the state of wearing the smart glasses, the occlusion see-through method of the present invention creatively utilizes an auxiliary detector, such as an external camera, to acquire an image of the occluded portion; the detector can cover the blind angle of the visual field caused by the occlusion, and the occluded portion is then spliced with the image captured by the detector to obtain an image free of the visual blind angle.
  • FIG. 1 is a schematic flow chart of an occlusion see-through method based on smart glasses according to an embodiment of the present invention.
  • the smart-glasses-based occlusion see-through method of the embodiment of the present invention may include the following steps:
  • S110 collect a first image of a user's field of view through the smart glasses and identify an obstruction image in the first image;
  • S120 Collect a second image of the user's field of view by using at least one image capturing device;
  • S130 Replace the occlusion image in the first image with the portion of the second image corresponding to the occlusion image, and splice to generate an unoccluded image of the user's field of view.
  • the smart glasses may be various different smart glasses, and an image collecting device such as a camera is generally disposed thereon.
  • the above-mentioned user generally refers to a user wearing the above-mentioned smart glasses. Since the position of the image capturing device of the smart glasses is very close to the position of the human eye, the image that the user can see can also be seen by the smart glasses.
  • an occlusion causes a blind spot in the user's field of view: objects behind it cannot be observed. For example, the pillar of a car window blocks part of the field of view of a person inside the car.
  • the difference between the user's natural field of view and the image seen by the image capture device of the smart glasses may be calibrated or corrected based on the positional relationship between the human eye and the image capture device of the smart glasses, thereby improving the authenticity of the image on the smart glasses display and enhancing the user experience.
  • the difference between the two views can be corrected using dual-camera calibration.
  • a camera can be placed at the position of the human eye, and the image of the calibration plate collected by that camera compared with the image of the calibration plate collected by the smart glasses camera, to obtain an image transformation matrix T. All images collected by the smart glasses are transformed by the matrix T before being displayed, so that the displayed image matches one acquired at the position of the human eye, as sketched below.
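  • as a hedged illustration of the calibration just described (not part of the patent text): assuming OpenCV and a standard chessboard calibration plate, the matrix T could be estimated as follows, with all names illustrative.

```python
import cv2

def estimate_eye_transform(eye_view, glasses_view, pattern=(9, 6)):
    """Estimate the matrix T mapping the glasses-camera image onto the image
    a camera placed at the human-eye position would see."""
    found_e, eye_pts = cv2.findChessboardCorners(eye_view, pattern)
    found_g, gls_pts = cv2.findChessboardCorners(glasses_view, pattern)
    if not (found_e and found_g):
        raise RuntimeError("calibration plate not visible in both views")
    # Homography from glasses-camera pixels to eye-position pixels.
    T, _ = cv2.findHomography(gls_pts, eye_pts, cv2.RANSAC)
    return T

# Every frame the glasses capture would then be warped by T before display:
# corrected = cv2.warpPerspective(frame, T, (frame.shape[1], frame.shape[0]))
```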
  • the first image can be acquired by an image acquisition device on the smart glasses.
  • the occlusion image in the first image can be obtained by image analysis in the processor of the smart glasses or in an additionally provided processor; that is, whether there is an occlusion in the user's field of view can be determined.
  • the occlusion captured by the camera of the smart glasses can be recognized to determine which part of the image to replace; recognition can also be assisted by the gyroscope of the smart glasses itself.
  • the camera of the smart glasses can recognize whether there is an obstruction in the field of view and replace it with the corresponding image.
  • the image around the occlusion and the occluded image can be effectively spliced, so that the position of the field of view can be better judged and replaced with the corresponding image.
  • the smart glasses may be see-through smart glasses.
  • the display of the see-through smart glasses allows natural light to pass through, so that the user can view the images displayed by the smart glasses while also seeing the natural real field of view.
  • the image generated in the smart glasses may be superimposed with the target image in the real field of view, and the image processed in the smart glasses may be used to cover the image of the obstruction in a part of the real field of view, thereby achieving the effect of the see-through object.
  • the occluded image can be acquired by one or more image capture devices, wherein at least one of the image capture devices can be unobstructed by the occlusion.
  • the image capture device may be any of various devices capable of acquiring images, such as a camera, a video camera, or an infrared image detector.
  • the above image capturing device can be installed in various positions as long as it can capture an image of the user's field of view blocked by the obstructed object, in other words, it needs to cover the position of the field of view blocked by the obstructed object, that is, the blind spot of the user's field of view is covered.
  • the image of the occluded portion can be collected.
  • for example, an image capture device disposed at the edge of the car roof can cover the blind angle caused by the window pillar.
  • the image capturing device is disposed on the back side of the obstructing object, so that an image of the user's field of view blocked by the obstructing object can be surely collected.
  • the second image can be acquired using only one image acquisition device.
  • alternatively, a plurality of image capture devices simultaneously capture images of different portions of the field of view blocked by the occlusion, and the images acquired by the different devices are stitched together to generate an image covering the entire blocked field of view.
  • the position of the image capturing device can be relatively fixed with the position of the occlusion object, thereby greatly reducing the amount of calculation of the image processing, and improving the image processing effect.
  • the portion of the second image corresponding to the obstruction image may refer to an image of the user's field of view blocked by the obstruction.
  • the portion of the second image corresponding to the image of the obstruction may be an image of the traffic light.
  • the unobstructed image shows the image behind the occlusion.
  • the image captured by the image capture device may be transmitted to a processor in a variety of manners, such as wireless or wired; the processor may be the processor on the smart glasses or an additionally provided processor.
  • the image acquisition device can collect images of the user's field of view blocked by the occlusion object in real time, and can update the unoccluded image displayed on the smart glasses display in real time.
  • an image of a user's field of view covered by the occluded object is collected by the image capturing device, and the image blocked by the occluded object may not be known in advance.
  • see-through of the entire occlusion can thus be realized. This overcomes the shortcoming of existing smart glasses that can only show the internal structure of an object rather than the scene behind it, and also breaks the limitation that the internal-structure image of the object must be known in advance.
  • the method of replacing the occlusion image in the first image with the portion of the second image corresponding to the occlusion image and splicing to generate an unoccluded image of the user's field of view may include the following steps:
  • S131 Extract a plurality of first feature points and a plurality of second feature points from the image portion of the first image other than the occlusion image and from the second image, respectively, wherein the second feature points are in one-to-one correspondence with the first feature points;
  • S132 Calculate, according to all the foregoing first feature points and all the foregoing second feature points, an image transformation matrix converted from an imaging perspective of the image capturing device to a user perspective;
  • S133 Perform image transformation on the second image by using the image transformation matrix.
  • S134 Splice the portion of the image-transformed second image corresponding to the occlusion image with the first image after the occlusion image is removed, to generate the unoccluded image.
  • the first feature point and the second feature point may be points in the corresponding image that can reflect the shape of the object, such as an apex or an inflection point of the object in the image, which can be obtained by a gradient change of the intensity of the point. Specifically, which points in the image are used as feature points may be determined as needed.
  • the second feature point is in one-to-one correspondence with the first feature point, and may mean that the second feature point and the first feature point are points on an image corresponding to the same point screen in the user's field of view. The more the number of feature points, the more accurate the image stitching.
  • in one embodiment, there are at least 5 pairs of feature points (the first and second feature points appear in pairs), which helps keep the resulting image transformation matrix correct and stable.
  • in specific implementations, all first feature points in the image portion of the first image other than the occlusion image may be extracted, all second feature points in the second image may be extracted, and the feature vectors and feature values of the corresponding feature points recorded; all first feature points and all second feature points are then matched according to the feature vectors and feature values, and only the successfully matched pairs are used to calculate the image transformation matrix. That is, the numbers of initially extracted first and second feature points may differ; for example, the number of second feature points may be greater than the number of first feature points.
  • the second image is captured from the imaging angle of view of the image capture device, while the first image is taken from the user's angle of view; when the two angles differ, the image portion common to the second image and the first image will generally be relatively deformed.
  • by performing image transformation on the second image with the image transformation matrix, the second image can be converted from the imaging angle of view of the image capture device to the user's angle of view, eliminating the image deformation problem.
  • the portion of the image-transformed second image corresponding to the occlusion image may be obtained by comparing the first image with the transformed second image; the occlusion image is removed from the first image, and the corresponding portion of the transformed second image is then spliced together with the first image from which the occlusion image was removed (a condensed sketch of the whole pipeline follows).
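  • a condensed sketch of steps S131 to S134, assuming OpenCV's SIFT as a stand-in for the unspecified feature-extraction algorithm and a given binary mask occ_mask (non-zero where the occlusion image lies); an illustration, not the patent's implementation:

```python
import cv2
import numpy as np

def generate_unoccluded(first_img, second_img, occ_mask):
    sift = cv2.SIFT_create()
    # S131: features outside the occlusion in the first image, and in the second.
    kp1, des1 = sift.detectAndCompute(first_img, cv2.bitwise_not(occ_mask))
    kp2, des2 = sift.detectAndCompute(second_img, None)
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # S132: transformation from the capture device's angle of view to the user's.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # S133: warp the second image into the user's angle of view.
    h, w = first_img.shape[:2]
    warped = cv2.warpPerspective(second_img, H, (w, h))
    # S134: replace the occluded region, keep the rest of the first image.
    out = first_img.copy()
    out[occ_mask > 0] = warped[occ_mask > 0]
    return out
```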
  • as an example of a feature extraction algorithm, the image is blurred multiple times, and each blurred image is compared with the image before blurring to obtain a difference layer. These layers can be stacked into a cube and searched in 3×3×3 neighborhoods: a value larger or smaller than everything around the center point is a feature point. At the same time, the feature vector of the feature point and its gradient in each direction are recorded as the feature value of the feature point. If both the feature values and the feature vectors of two feature points are the same, the two feature points can be considered the same point, that is, corresponding to each other. A simple sketch of this scheme follows.
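  • the scheme resembles a difference-of-Gaussians detector; a deliberately simple (and slow) sketch, with every parameter value an assumption:

```python
import cv2
import numpy as np

def dog_feature_points(gray, n_blurs=5, sigma0=1.6, response_thresh=3.0):
    blurred = [gray.astype(np.float32)]
    for i in range(n_blurs):
        blurred.append(cv2.GaussianBlur(blurred[-1], (0, 0), sigma0 * 2 ** (i / 2)))
    # Each blurred image minus the image before blurring -> one difference layer.
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(n_blurs)])
    pts = []
    for s in range(1, n_blurs - 1):
        for y in range(1, gray.shape[0] - 1):
            for x in range(1, gray.shape[1] - 1):
                cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]  # 3x3x3 lookup
                c = dog[s, y, x]
                # Larger or smaller than everything else in the cube = feature point.
                if (c == cube.max() or c == cube.min()) and abs(c) > response_thresh:
                    pts.append((x, y, s))
    return pts
```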
  • this transformation matrix can be a 3×3 matrix, obtained from the feature-point correspondences by a least-squares (minimum-variance) solution, as sketched below.
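  • one standard least-squares construction of such a matrix is the direct linear transform (DLT), sketched below; the patent does not name a solver, so this is an assumption (four pairs suffice mathematically, the embodiment suggests at least five):

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 2) arrays of matched feature points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The h minimising ||Ah|| with ||h|| = 1 is the last right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```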
  • performing image transformation on the second image with the image transformation matrix before splicing the unoccluded image prevents the angle difference between the imaging angle of view of the image capture device and the user's field of view (the smart glasses' imaging angle of view) from making the imaging of the occluded portion unintuitive, which would confuse and impair the user's normal field of view.
  • a method of extracting a plurality of first feature points and a plurality of second feature points from the image portion other than the occlusion image in the first image and the second image, respectively may be that a plurality of first feature points and a plurality of second feature points are respectively extracted from the image portion other than the occlusion image in the first image and the second image by the feature extraction algorithm.
  • the above feature extraction algorithm can extract inflection points, where an inflection point is a point that differs greatly from the surrounding pixels. For example, a vertical edge in an image clearly differs from the pixels on its left and right, but the edge itself cannot serve as a feature point (inflection point): an edge is a line with too many points on it, and it is impossible to determine precisely which of them should be the inflection point. By contrast, if two edges intersect, there must be an intersection point, and that intersection differs from its surrounding pixels in every direction.
  • the feature point can generally be the position of the sharp corner of the object in the image.
  • the above feature points include the first feature point and the second feature point described above.
  • when the occlusion image is removed from the first image, an edge region of set pixel width that still contains part of the occlusion image may be retained.
  • the image portion of the occlusion object is removed, and the occluded edge portion is retained, which can be used as a basis for image splicing to improve the image splicing quality.
  • the occlusion portion is removed, the unobstructed portion can be retained.
  • part of the image of the unoccluded portion and the occlusion portion can be processed at the time of display to make the display look more natural.
  • the width of the reserved edge portion may be a fixed number of pixels, or may be determined according to the difference between the spliced pixels. In one embodiment the width of the edge portion is not less than 5 pixels, which benefits the smoothness of the image display.
  • alternatively, the edge portion may have a width of 0 to 5 pixels, or the edge of the occlusion may be expanded so that the removed occlusion image is larger than the actual occlusion image, making the stitched image more natural and smoother; a sketch of this edge handling follows.
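  • a possible rendering of this edge handling, assuming the occlusion mask is slightly expanded and the seam feathered over roughly the 5-pixel width mentioned above; every value here is illustrative:

```python
import cv2
import numpy as np

def composite_with_border(first_img, warped_second, occ_mask, border_px=5):
    kernel = np.ones((2 * border_px + 1, 2 * border_px + 1), np.uint8)
    grown = cv2.dilate(occ_mask, kernel)  # removed region larger than the occlusion
    # Feather: blur the binary mask so the replacement fades in over the border.
    alpha = cv2.GaussianBlur(grown.astype(np.float32) / 255.0, (0, 0), border_px)
    if first_img.ndim == 3:
        alpha = alpha[..., None]
    out = first_img * (1.0 - alpha) + warped_second * alpha
    return out.astype(first_img.dtype)
```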
  • FIG. 3 is a schematic flow chart of a method for stitching an unoccluded image of a user's field of view in another embodiment of the present invention.
  • S135 Initially position the splicing location between the portion of the second image corresponding to the occlusion and the first image with the occlusion image removed, according to the imaging angle and spatial position information of the image capture device.
  • the speed of image stitching can be accelerated.
  • the positions of the image capturing device and the obstructing object are preferably relatively fixed, and the occluded image may have a certain relative relationship with the image collected by the image capturing device.
  • the spatial position information may be a relative spatial position of the image capturing device and the obstructing object.
  • once the position of the occlusion is found, the relative spatial position may be recorded; under such conditions the occlusion and the image capture device can be considered not to move relative to each other, so the position information of the occlusion can be located and used for image splicing.
  • the information collected by the image capture device in that angular direction can then be used to replace the occlusion image.
  • in this way, the range of the image that must be searched for corresponding feature points (a first feature point and its corresponding second feature point) during stitching can be greatly reduced, improving image processing speed; a sketch follows.
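  • one way to realize this, assuming the relative geometry has been measured once and stored as a hypothetical bounding box stored_bbox in the second image:

```python
def crop_search_region(second_img, stored_bbox, margin=30):
    """stored_bbox: (x, y, w, h) of where the occluded view falls in the second
    image while the capture device and the occlusion do not move relative to
    each other; margin absorbs small positioning errors."""
    x, y, w, h = stored_bbox
    h_img, w_img = second_img.shape[:2]
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(w_img, x + w + margin), min(h_img, y + h + margin)
    # Feature search then runs on this crop instead of the full frame.
    return second_img[y0:y1, x0:x1], (x0, y0)
```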
  • the gyroscope of the smart glasses can provide information such as direction, so the angular relationship between the occlusion and the smart glasses user can be known in advance, that is, whether the user is looking in the direction of the occlusion; this can serve as an auxiliary basis for displaying see-through images or for the splicing computation.
  • the relative positions between the plurality of image capture devices can be fixed, so that after their images have been stitched once, only the image transformation matrix needs to be retained and applied to new images; there is no need to perform feature extraction and search for corresponding feature points every time, thereby improving image processing speed.
  • the method for recognizing the occlusion object image in the first image by using the smart glasses may be: identifying the above according to a constant attribute of the occlusion object or a graphic mark on the occlusion object. An occlusion image in an image.
  • the occlusion image may be identified based on the boundary region where the graphic mark is located.
  • the first image can extract the boundary of the image during image processing, so that the entire first image can be divided into many regions, and the region where the constant attribute or the graphic mark is located can be identified as an occlusion, and the image of the region can be replaced.
  • the graphic mark may be a unique mark set on the occlusion in advance to identify the occlusion.
  • the method of identifying the occlusion image in the first image may include the color, pattern, relative position or feature identification of the occlusion, and the like.
  • the color may refer to the relatively constant color of the occlusion; by acquisition over a period of time the color of the occlusion can be confined to a certain range, so that objects of that color can be given a certain weight when judging whether they are occlusions.
  • the pattern and the feature identifier are essentially the same, but may be identified as conforming to a specific pattern feature, and the contour may be identified as an obstruction after recognition.
  • the relative position may be recorded with a gyroscope, recording the relative relationship between the occlusion and the user for use in recognizing the occlusion.
  • the method of identifying the occlusion can be an algorithm for object recognition.
  • since the occlusion is recognized from a fixed angle, and the blind angle of the field of view and the occlusion itself generally do not change, recognition of the occlusion can be achieved and its accuracy improved; the color of the occlusion is likewise relatively fixed and hardly changes.
  • if no occlusion is found in the field of view, the image taken by the smart glasses camera can be used directly, without performing an image replacement operation.
  • alternatively, nothing is displayed, and the user simply observes through the transparent lens.
  • the user can directly see the natural field of vision through the smart glasses.
  • the smart glasses can be transparent see-through smart glasses, such as Google Glass and HoloLens.
  • FIG. 4 is a schematic flow chart of an occlusion see-through method based on smart glasses according to still another embodiment of the present invention.
  • the smart-glasses-based occlusion see-through method shown in FIG. 1 may further include the steps of:
  • when the occluded natural field of view changes, for example when the user's head moves, the unoccluded image can be updated according to the gyroscope data of the smart glasses themselves. Specifically, the first image and the second image may be updated and, based on the updated images, the unoccluded image regenerated through steps S120 to S130. In this way the unoccluded image is updated in real time when the picture in the user's field of view changes; a minimal sketch of the gyroscope test follows.
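  • a minimal sketch of the gyroscope test; the angle representation and threshold are assumptions, and when it returns True, steps S110 to S130 would be re-run:

```python
import numpy as np

def view_changed(prev_angles, cur_angles, threshold_deg=2.0):
    """prev_angles/cur_angles: (yaw, pitch, roll) read from the glasses' gyroscope."""
    delta = np.abs(np.asarray(cur_angles, float) - np.asarray(prev_angles, float))
    return float(delta.max()) > threshold_deg
```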
  • the image capture device can acquire the second image in real time, and the re-acquired second image can be used to update the previous second image when needed.
  • the occlusion in the field of view may be determined as follows: first, the images from the image acquisition devices are spliced relatively completely to form a full image used as a template; then, referring to the image captured by the smart glasses and the glasses' own gyroscope information, a part of the template is selected as a field-of-view window, and the window from the spliced template is compared with the image captured by the smart glasses to determine the occlusion. The image captured by the image acquisition device is then used to replace the occlusion image; a sketch of this template scheme follows.
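  • a hedged sketch of that template scheme, with cv2.matchTemplate standing in for the unspecified window search and a simple difference threshold standing in for the comparison:

```python
import cv2
import numpy as np

def find_occlusion_via_template(panorama, glasses_frame, diff_thresh=40):
    # Locate the glasses frame as a window on the stitched template
    # (gyroscope data could seed or constrain this search).
    res = cv2.matchTemplate(panorama, glasses_frame, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)  # best-matching window origin
    h, w = glasses_frame.shape[:2]
    window = panorama[y:y + h, x:x + w]
    # Pixels that differ strongly from the unobstructed template are occlusion.
    diff = cv2.absdiff(window, glasses_frame)
    if diff.ndim == 3:
        diff = diff.max(axis=2)
    occ_mask = (diff > diff_thresh).astype(np.uint8) * 255
    return occ_mask, (x, y)
```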
  • FIG. 5 is a schematic flow chart of an occlusion see-through method based on smart glasses according to still another embodiment of the present invention. As shown in FIG. 5, the method shown in FIG. 1 may further include the steps of:
  • the smart glasses determine whether the first image needs to be updated according to the detection data of the gyroscope on the smart glasses and the detection data of the gyroscope at a set position;
  • the set position may be various positions outside the smart glasses, such as on a car.
  • the change of the viewing angle of the user can be determined in various situations, for example, whether the user's viewing angle changes during the running of the vehicle.
  • when the user's viewing angle changes, the first image may be updated by the smart glasses, and an unoccluded image generated based on the updated first image, so that the unoccluded image is updated in real time as the viewing angle changes.
  • the first image may not be updated.
  • the image capturing device may be a camera.
  • the second image may be acquired in real time, and the second image may be updated in real time.
  • the second image may be re-acquired and used to generate the above unoccluded image.
  • the second image may be re-acquired or may not be re-acquired, as long as the second image may still include the user's field of view blocked by the obstruction in the updated first image.
  • FIG. 6 is a schematic flow chart of an occlusion see-through method based on smart glasses according to another embodiment of the present invention. As shown in FIG. 6, the method shown in FIG. 1 may further include the steps of:
  • the other image capture device is distinct from the image capture device that collected the second image before the user's viewing angle changed, and may be another one of the plurality of image capture devices.
  • the other image capture device can capture an image of the user's field of view blocked by the obstruction within the user's new perspective.
  • the other image acquisition device and the image acquisition device described above may differ in imaging angle and/or position. Depending on the user's perspective, using a different image acquisition device to capture the second image can ensure that the user can see through the obstruction in more perspectives.
  • in the smart-glasses-based occlusion see-through method shown in FIG. 1, the difference between the direction angle of the occlusion relative to the user and the imaging viewing angle of the image capture device is less than 90 degrees.
  • keeping this difference below 90 degrees helps overcome the mismatch between the three-dimensional actual space and the two-dimensional image that expresses it, making the see-through result more accurate.
  • in other embodiments, the difference between the direction angle of the occlusion relative to the user and the imaging viewing angle of the image capture device may take other values, for example determined according to the angle of view the image capture device can cover, such as 1/2 of the lens angle of view of the image capture device.
  • the viewing angle difference can be 30 degrees or less, so that a relatively ideal image can be obtained.
  • the image capture device can adopt a 180-degree wide-angle lens, in which case the viewing-angle difference can be close to 90 degrees, and the images captured by a plurality of lenses can be stitched to obtain a panoramic image; a small angle-check sketch follows.
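  • the angle constraint itself is simple vector geometry; a sketch assuming both directions are given as 3-D vectors:

```python
import numpy as np

def viewing_angle_ok(user_dir, camera_axis, limit_deg=90.0):
    """True if the angle between the user-to-occlusion direction and the
    capture device's optical axis is below limit_deg (ideally <= 30)."""
    u, c = np.asarray(user_dir, float), np.asarray(camera_axis, float)
    cos = np.dot(u, c) / (np.linalg.norm(u) * np.linalg.norm(c))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) < limit_deg
```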
  • FIG. 7 is a schematic flow chart of a method for collecting a second image of a user's field of view through a plurality of image acquisition devices according to an embodiment of the invention.
  • the number of the image capturing devices is multiple.
  • the method for collecting the second image of the user's field of view by using multiple image capturing devices may include the following steps:
  • S121 Collect a third image of the user's field of view by using each of the image capturing devices.
  • the imaging angle of view and/or the position of each image capture device may differ, so that different third images are collected.
  • when a single image capture device cannot capture the complete view blocked by the occlusion, the partial occluded-view images collected by the devices are spliced together to form a complete occluded-view image for replacing the occlusion image in the first image.
  • the second image formed by splicing may cover a wider field of view than any single third image; when the picture in the user's field of view changes and/or the user's viewing angle changes, the second image can still include the image of the field of view blocked by the occlusion and need not be updated. If the first image changes, the portion corresponding to the updated first image may be found directly in the second image, improving image processing speed.
  • FIG. 8 is a schematic flow chart of a method for stitching a second image according to an embodiment of the present invention.
  • the method of stitching each of the third images together to form the second image of the unobstructed object may include the following steps:
  • S1221 Extract a plurality of third feature points from the image portions other than the occlusion image in each third image, the third feature points extracted from different third images being in one-to-one correspondence;
  • S1222 Calculate, according to different third feature points in the third image, an image conversion matrix converted from an imaging angle of view of another image capturing device to an imaging angle of view of one of the image capturing devices;
  • S1223 performing image transformation on the third image collected by the other image collection devices by using the image transformation matrix.
  • S1224 Splice the image-transformed third images with the third image collected by the one image capture device to obtain an image corresponding to the entire occlusion image in the first image; a sketch of these steps follows.
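  • a sketch of S1221 to S1224 under the same SIFT/homography assumptions as the earlier example: each additional device's third image is warped into the reference device's angle of view and pasted where the canvas is still empty. For brevity the canvas only grows rightward and 3-channel images are assumed; a production stitcher would size it properly:

```python
import cv2
import numpy as np

def stitch_third_images(reference, others):
    sift = cv2.SIFT_create()
    h, w = reference.shape[:2]
    canvas = np.zeros((h, 2 * w, 3), np.uint8)  # extra room to the right (assumed)
    canvas[:, :w] = reference
    kp_r, des_r = sift.detectAndCompute(reference, None)
    for img in others:
        kp_o, des_o = sift.detectAndCompute(img, None)
        matches = cv2.BFMatcher().knnMatch(des_o, des_r, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # S1221
        src = np.float32([kp_o[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)       # S1222
        warped = cv2.warpPerspective(img, H, (2 * w, h))           # S1223
        empty = canvas.max(axis=2) == 0
        canvas[empty] = warped[empty]                              # S1224
    return canvas
```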
  • the specific implementation method may be similar to the splicing method shown in FIG. 2.
  • the third feature point may be extracted by using a feature point extraction algorithm.
  • alternatively, the portions corresponding to the occlusion image in the individual images to be spliced may be clipped and spliced, so that the result corresponds to and replaces the entire occlusion image in the first image.
  • the see-through occlusion method can include the steps of:
  • the spliced image is used as a template; the image captured by the smart glasses, together with the glasses' gyroscope data, is used to search a window on the template, and the corresponding image blocked by the occlusion is replaced.
  • the blind angle of the field of view and the portion blocked by the occlusion may change in real time, so the data must be real-time; by contrast, the result of existing fluoroscopy-style smart glasses is the internal structure of an object, which is relatively fixed, requires no real-time processing, and basically does not change over time.
  • the invention can be applied where blind angles cause great harm, such as traffic and military scenarios; and since the actually blocked object is rendered through real-time see-through, the displayed result is accurate and usable.
  • FIG. 9 is a schematic diagram of a first image acquired by smart glasses in accordance with an embodiment of the present invention.
  • the image seen by the smart glasses can be approximated as an image seen in the user's natural field of view.
  • the first image 300 acquired by the smart glasses includes an obstruction image 301 and a non-occlusion image 302.
  • in this example the blind spot of the user's field of view (corresponding to the occlusion image 301) is on the right of the field of view; the blind zone may also be on the left, above, below, around the periphery, or in the center of the field of view.
  • the first feature points A1, B1, C1, D1, E1, and F1 may be selected, and the feature points are inflection points or intersection points.
  • FIG. 10 is a schematic diagram of a second image collected by an image acquisition device including an image portion corresponding to the occlusion object shown in FIG. 9 according to an embodiment of the present invention.
  • the acquired second image 400 includes both an image corresponding to the obstruction image 301 of FIG. 9 and an image that at least partially corresponds to the non-occluded image 302.
  • the second feature points A2, B2, C2, D2, E2, F2 corresponding to the first feature points A1, B1, C1, D1, E1, F1 can be found in the second image 400.
  • the five-pointed star image in the second image 400 is deformed with respect to the five-pointed star image shown in FIG. 9, and the second feature point C2 corresponding to the first feature point C1 is occluded.
  • the occlusion image in the second image 400 does not include a portion corresponding to the occlusion image 301 in the first image 300.
  • the first feature points A1, B1, C1, D1, E1, F1 in FIG. 9 and the second feature points A2, B2, C2, D2, E2, and F2 in FIG. 10 do not necessarily overlap well.
  • the first feature points A1, B1, C1, D1, E1, and F1 and the second feature points A2, B2, C2, D2, E2, and F2 can be well overlapped to achieve image stitching. So that the stitched image conforms to the natural field of view of the human eye.
  • multiple image acquisition devices may be used for image acquisition to ensure complete coverage of the occluded field of view and the natural field of view. Images obtained by different image acquisition devices may be connected together by image stitching to obtain the second image 400.
  • FIG. 11 is a schematic illustration of an unobstructed image stitched from the images shown in Figures 9 and 10.
  • an unoccluded image 500 can be obtained, wherein the replaced partial image 501 is shown in dotted lines.
  • the images are stitched together to form a complete image, or the image blocked by the occluded object can be displayed using a virtual image, and the dead angle of the field of view can be removed by this means.
  • the image of the occluded part collected by the image acquisition device is spliced onto the image of the existing field of view to complete the fluoroscopy of the occlusion object.
  • the occlusion see-through method based on smart glasses in the embodiments of the present invention uses the image of the user's field of view collected by the camera of the smart glasses, analyzes and identifies the position and image of the occlusion in the field of view through various methods, collects the image of the occluded field of view with an external image acquisition device, replaces the occluded portion with the image collected by the external detector, and then registers and splices the replaced image with the user's field-of-view image. The smart glasses can thus show the image blocked by the occlusion while the user looks at it, producing a see-through effect and effectively removing the blind angle caused by the occlusion in the user's field of view.
  • the embodiment of the present application further provides an occlusion see-through device based on smart glasses, as described in the following embodiments. Since the principle by which this device solves the problem is similar to that of the occlusion see-through method based on smart glasses, the implementation of the device can refer to the implementation of the method, and the repeated parts are not described again.
  • FIG. 12 is a schematic structural diagram of an occlusion see-through device based on smart glasses according to an embodiment of the present invention.
  • the smart-glasses-based occlusion see-through device may include: an occlusion image acquisition unit 210, a replacement image acquisition unit 220, and an unoccluded image generating unit 230, the units being connected in sequence.
  • the occlusion image acquisition unit 210 is configured to collect a first image of the user's field of view through the smart glasses and identify the occlusion image in the first image.
  • the replacement image acquisition unit 220 is configured to acquire a second image of the user's field of view by using at least one image acquisition device.
  • the unoccluded image generating unit 230 is configured to replace the occlusion image in the first image with the portion of the second image corresponding to the occlusion image, and splice to generate an unoccluded image of the user's field of view; a structural sketch follows.
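  • a structural sketch mirroring the three units of FIG. 12, with hypothetical callables injected for capture, identification and splicing; it shows only the data flow:

```python
class OcclusionSeeThroughDevice:
    def __init__(self, capture_first, identify_occlusion, capture_second, splice):
        self.capture_first = capture_first            # unit 210: first image
        self.identify_occlusion = identify_occlusion  # unit 210: occlusion image
        self.capture_second = capture_second          # unit 220: second image
        self.splice = splice                          # unit 230: unoccluded image

    def run(self):
        first = self.capture_first()
        occ_mask = self.identify_occlusion(first)
        second = self.capture_second()
        return self.splice(first, second, occ_mask)
```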
  • the smart glasses may be a variety of different smart glasses, generally provided with an image capture device, such as a camera.
  • the above-mentioned user generally refers to a user wearing the above-mentioned smart glasses. Since the position of the image capturing device of the smart glasses is very close to the position of the human eye, the image that the user can see can also be seen by the smart glasses.
  • an occlusion causes a blind spot in the user's field of view: objects behind it cannot be observed. For example, the pillar of a car window blocks part of the field of view of a person inside the car.
  • the difference between the user's natural field of view and the image seen by the image capture device of the smart glasses may be calibrated or corrected based on the positional relationship between the human eye and the image capture device of the smart glasses, thereby improving the authenticity of the image on the smart glasses display and enhancing the user experience.
  • the difference between the two views can be corrected using dual-camera calibration.
  • a camera can be placed at the position of the human eye, and the calibration plate image collected by that camera compared with the calibration plate image collected by the smart glasses camera, to obtain an image transformation matrix T. All images collected by the smart glasses are transformed by the matrix T before being displayed, so that the displayed image matches one acquired at the position of the human eye.
  • the first image can be acquired by an image acquisition device on the smart glasses.
  • the occlusion image in the first image can be obtained by image analysis in the processor of the smart glasses or in an additionally provided processor; that is, whether there is an occlusion in the user's field of view can be determined.
  • the occlusion captured by the camera of the smart glasses can be recognized to determine which part of the image to replace; recognition can also be assisted by the gyroscope of the smart glasses itself, so that the camera of the smart glasses can recognize whether there is an occlusion in the field of view and replace it with the corresponding image.
  • the image around the occlusion and the occluded image can be effectively spliced, so that the position of the field of view can be better judged and replaced with the corresponding image.
  • the smart glasses may be see-through smart glasses.
  • the display of the see-through smart glasses allows natural light to pass through, so that the user can view the images displayed by the smart glasses while also seeing the natural real field of view.
  • the image generated in the smart glasses may be superimposed with the target image in the real field of view, and the image processed in the smart glasses may be used to cover the image of the obstruction in a part of the real field of view, thereby achieving the effect of the see-through object.
  • the occluded image can be acquired by one or more image capture devices, wherein at least one of the image capture devices can be unobstructed by the occlusion.
  • the image capture device may be any of various devices capable of acquiring images, such as a camera, a video camera, or an infrared image detector.
  • the above image capturing device can be installed in various positions as long as it can capture an image of the user's field of view blocked by the obstructed object, in other words, it needs to cover the position of the field of view blocked by the obstructed object, that is, the blind spot of the user's field of view is covered.
  • the image of the occluded portion can be collected.
  • for example, an image capture device disposed at the edge of the car roof can cover the blind angle caused by the window pillar.
  • the image capturing device is disposed on the back side of the obstructing object, so that an image of the user's field of view blocked by the obstructing object can be surely collected.
  • the second image can be acquired using only one image acquisition device.
  • alternatively, a plurality of image capture devices simultaneously capture images of different portions of the field of view blocked by the occlusion, and the images acquired by the different devices are stitched together to generate an image covering the entire blocked field of view.
  • the position of the image capturing device can be relatively fixed with the position of the occlusion object, thereby greatly reducing the amount of calculation of the image processing, and improving the image processing effect.
  • a portion of the second image corresponding to the obstruction image may refer to an image of a user's field of view blocked by the obstruction.
  • the portion of the second image corresponding to the image of the obstruction may be an image of the traffic light.
  • the unobstructed image shows the image behind the occlusion.
  • the image collected by the image capture device may be transmitted to a processor in a variety of manners, such as wireless or wired; the processor may be the processor on the smart glasses or an additionally provided processor.
  • the image acquisition device can collect images of the user's field of view blocked by the occlusion object in real time, and can update the unoccluded image displayed on the smart glasses display in real time.
  • an image of a user's field of view covered by the occluded object is collected by the image capturing device, and the image blocked by the occluded object may not be known in advance.
  • see-through of the entire occlusion can thus be realized. This overcomes the shortcoming of existing smart glasses that can only show the internal structure of an object rather than the scene behind it, and also breaks the limitation that the internal-structure image of the object must be known in advance.
  • FIG. 13 is a schematic structural diagram of an unoccluded image generating unit according to an embodiment of the present invention.
  • the unoccluded image generating unit 230 may include a feature point extraction module 231, a transformation matrix generation module 232, an image transformation module 233, and an unoccluded image generating module 234, the modules being connected in sequence.
  • the feature point extraction module 231 is configured to extract a plurality of first feature points and a plurality of second feature points from the image portion of the first image other than the occlusion image and from the second image, respectively, the second feature points being in one-to-one correspondence with the first feature points.
  • the transformation matrix generation module 232 is configured to calculate an image transformation matrix converted from an imaging perspective of the image capturing device to a user perspective according to all the first feature points and all the second feature points.
  • the image transformation module 233 is configured to perform image transformation on the second image by using the image transformation matrix.
  • the unoccluded image generating module 234 is configured to splice the portion of the image-transformed second image corresponding to the occlusion image with the first image after the occlusion image is removed, to generate the unoccluded image.
  • the first feature point and the second feature point may be points in the corresponding image that can reflect the shape of the object, such as an apex or an inflection point of the object in the image, which may be obtained by a gradient change of the intensity of the point. .
  • which points in the image are used as feature points may be determined as needed.
  • the second feature point is in one-to-one correspondence with the first feature point, and may mean that the second feature point and the first feature point are points on an image corresponding to the same point screen in the user's field of view. The more the number of feature points, the more accurate the image stitching.
  • in one embodiment, there are at least 5 pairs of feature points (the first and second feature points appear in pairs), which helps keep the resulting image transformation matrix correct and stable.
  • in specific implementations, all first feature points in the image portion of the first image other than the occlusion image may be extracted, all second feature points in the second image may be extracted, and the feature vectors and feature values of the corresponding feature points recorded; all first feature points and all second feature points are then matched according to the feature vectors and feature values, and only the successfully matched pairs are used to calculate the image transformation matrix.
  • the second image is captured from the imaging angle of view of the image capture device, while the first image is captured from the user's angle of view; when the two angles differ, the image portion common to the second image and the first image generally has a relative deformation.
  • the second image is image-converted by using the image transformation matrix, and the second image can be converted into an image angle of view by the imaging angle of the image capturing device, and the image deformation problem can be eliminated.
  • the portion of the image-transformed second image corresponding to the occlusion image may be obtained by comparing the first image with the transformed second image; the occlusion image is removed from the first image, and the corresponding portion of the transformed second image is then spliced together with the first image from which the occlusion image was removed.
  • performing image transformation on the second image with the image transformation matrix before splicing the unoccluded image prevents the angle difference between the imaging angle of view of the image capture device and the user's field of view (the smart glasses' imaging angle of view) from making the imaging of the occluded portion unintuitive, which would confuse and impair the user's normal field of view.
  • the feature point extraction module 231 may include a feature extraction algorithm module 2311.
  • the feature extraction algorithm module 2311 is configured to respectively extract a plurality of first feature points and a plurality of second feature points from the image portion other than the occlusion object in the first image and the second image by the feature extraction algorithm.
  • The feature extraction algorithm can extract corner points (inflection points); a corner point is a point that differs strongly from the surrounding pixels. For example, if there is a vertical edge in the image, the pixels on either side of the edge clearly differ strongly from those on the edge line, yet the edge cannot serve as a feature point, because an edge is a line with too many points on it and it is impossible to determine which of them should be the corner. If, on the other hand, two edges intersect, there must be an intersection point, and that point differs strongly from its surrounding pixels in every direction; being unique, it does not produce multiple corners along the edges and can be chosen as the feature point.
  • Feature points are therefore generally located at the sharp corners of objects in the image.
  • The feature points here include the first feature points and the second feature points described above.
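As a concrete illustration of this corner-versus-edge distinction, the Harris response used in the sketch below is large only where intensity changes strongly in every direction, so points on a plain edge are rejected while edge intersections are kept; the detector choice and thresholds are assumptions of the sketch.

```python
import cv2
import numpy as np

def corner_feature_points(gray, max_points=500):
    """Detect corner-like feature points on a grayscale image with the
    Harris response, keeping the strongest responses first."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > 0.01 * response.max())
    ranked = sorted(zip(ys, xs), key=lambda p: -response[p[0], p[1]])
    return ranked[:max_points]
```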
  • The unobstructed image generation module 234 is further configured so that, after the obstruction image is removed, a region of set pixel width at the edge of the first image still contains part of the obstruction image.
  • The image of the obstruction is removed while the blocked edge band is retained; the retained band can serve as a basis for image stitching and improves stitching quality.
  • When the blocked portion is removed, the unblocked portion is retained.
  • The part of the image where the unblocked portion meets the blocked portion can be processed at display time so that the display looks more natural.
  • The retained edge band may use a fixed number of pixels, or its width may be decided from the difference between the stitched pixels. In one embodiment the width of the band is not less than 5 pixels, which facilitates smooth display.
  • Alternatively, the band may be 0 to 5 pixels wide, or the edge of the obstruction may be dilated so that the removed obstruction image is somewhat larger than the actual one, making the stitched image more natural and smoother.
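A minimal sketch of the dilation variant, assuming a binary obstruction mask; band_px follows the pixel widths suggested above and is otherwise an assumed parameter.

```python
import cv2
import numpy as np

def removal_mask_with_band(occlusion_mask, band_px=5):
    """Dilate the binary obstruction mask so the removed region is band_px
    pixels larger than the obstruction itself; the returned ring can be
    feather-blended at display time for a smoother seam."""
    kernel = np.ones((2 * band_px + 1, 2 * band_px + 1), np.uint8)
    dilated = cv2.dilate(occlusion_mask, kernel)
    band = cv2.subtract(dilated, occlusion_mask)  # retained edge ring
    return dilated, band
```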
  • FIG. 14 is a schematic structural diagram of the unobstructed image generation unit according to an embodiment of the present invention.
  • The unobstructed image generation unit 230 may further include an initial positioning module 235 connected between the image transformation module 233 and the unobstructed image generation module 234.
  • The initial positioning module 235 is configured to initially locate, according to the imaging angle and spatial position information of the image acquisition device, the stitching position between the portion of the transformed second image corresponding to the obstruction image and the first image after removal of the obstruction image.
  • The positions of the image acquisition device and the obstruction are preferably fixed relative to each other, so that the blocked image bears a definite relationship to the image captured by the device.
  • The spatial position information may be the relative spatial position of the image acquisition device and the obstruction. When the position of the obstruction is found, this relative position can be recorded; under this condition the obstruction and the device can be considered not to move relative to each other, so the position of the obstruction can be located and used for stitching. For example, if an obstruction lies in a certain angular direction, the information captured by the image acquisition device in that direction can be used to replace the obstruction image.
  • In this way, the range of the image that must be searched for corresponding feature points (first feature points and the corresponding second feature points) during stitching is greatly reduced, which improves image processing speed.
  • The gyroscope of the smart glasses themselves can provide direction and similar information, so the angular relationship between the obstruction and the wearer is known in advance, that is, whether the user is looking toward the obstruction; this can serve as an auxiliary basis for deciding whether to display the see-through image or run the stitching computation.
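A minimal sketch of such a gyroscope gate, with an assumed half field of view for the glasses camera:

```python
def facing_obstruction(glasses_yaw_deg, obstruction_bearing_deg,
                       half_fov_deg=45.0):
    """Return True when the recorded bearing of the obstruction falls
    inside the glasses' horizontal field of view, i.e. the user is
    looking toward the obstruction. half_fov_deg is an assumed value."""
    diff = (glasses_yaw_deg - obstruction_bearing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= half_fov_deg
```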
  • When multiple image acquisition devices are present, their relative positions can be fixed, so that after their images have been stitched once, only the image transformation matrix needs to be retained and applied to new images; feature extraction and the search for corresponding feature points need not be repeated every time, which improves image processing speed.
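One way to realize this reuse is to cache the per-camera matrices once and only re-apply them to new frames; a sketch under that assumption:

```python
import cv2

class FixedRigWarper:
    """Cache the per-camera 3x3 matrices of a rig whose cameras do not
    move; after one stitching pass only the warps are repeated."""
    def __init__(self, homographies, out_size):
        self.homographies = homographies  # one matrix per camera
        self.out_size = out_size          # (width, height)

    def warp_all(self, frames):
        return [cv2.warpPerspective(frame, H, self.out_size)
                for frame, H in zip(frames, self.homographies)]
```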
  • The obstruction image acquisition unit 210 may include an obstruction image acquisition module 211.
  • The obstruction image acquisition module 211 is configured to identify the obstruction image in the first image according to a constant attribute of the obstruction or a graphic mark on the obstruction.
  • The obstruction image may be identified from the boundary region where the graphic mark is located.
  • During processing, the boundaries of the first image can be extracted, dividing the whole first image into many regions; the region in which the constant attribute or the graphic mark lies can be identified as the obstruction, and the image of that region replaced.
  • The graphic mark may be a unique mark placed on the obstruction in advance to identify it.
  • Methods of identifying the obstruction image in the first image may rely on the color, pattern, relative position or feature identifier of the obstruction.
  • Color refers to the obstruction's own relatively constant color: by collecting over a period of time, the obstruction's color can be confined to a certain range, so objects of that color can be given a certain weight when judged as obstructions (a color-threshold sketch follows below).
  • Pattern and feature identifier are essentially the same: recognition is by conformity to a specific pattern feature, and after recognition the containing contour is identified as the obstruction.
  • Relative position may be recorded by a gyroscope, capturing the relative relationship between the obstruction and the user for use in recognizing the obstruction.
  • The method of identifying the obstruction may be an object recognition algorithm.
  • The features of the obstructing object may be recorded through the smart glasses, a graphic mark used on the obstruction, or the spatial position of the obstruction recorded, for example by using the glasses' gyroscope to record the angle between the obstruction and the glasses, so that the obstruction is recognized when the glasses face that angle.
  • Blind spots and the obstruction itself generally change little.
  • Exploiting the relative constancy of the obstruction's own attributes to recognize it improves the accuracy of obstruction identification.
  • For example, the color of the obstruction is relatively fixed and hardly changes.
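A minimal sketch of identification by constant color, assuming the HSV bounds of the obstruction's color have been learned over a period of observation; the area threshold stands in for the weighting mentioned above.

```python
import cv2
import numpy as np

def obstruction_mask_by_color(frame_bgr, hsv_low, hsv_high, min_area=500):
    """Threshold the learned color range of the obstruction in HSV space
    and keep sufficiently large regions as candidate obstructions."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    clean = np.zeros_like(mask)
    for contour in contours:
        if cv2.contourArea(contour) >= min_area:
            cv2.drawContours(clean, [contour], -1, 255, cv2.FILLED)
    return clean
```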
  • If no obstruction is found in the field of view, no image replacement operation is needed, and the image photographed by the smart glasses camera can be used.
  • With see-through glasses, the display can also simply be suppressed, letting the user observe directly through the lenses.
  • The user then sees the natural field of view directly through the smart glasses.
  • The smart glasses may be of the see-through type, for example glasses similar to googleglass and hololens.
  • FIG. 15 is a schematic structural diagram of an obstruction see-through apparatus based on smart glasses according to another embodiment of the present invention.
  • The apparatus shown in FIG. 12 may further include a first image updating unit 240 connected to the unobstructed image generation unit 230.
  • The first image updating unit 240 is configured so that, when the picture within the user's field of view changes, the smart glasses update the first image, the image acquisition device recaptures the second image to update it, and the unobstructed image is regenerated based on the updated first image and the updated second image.
  • When the blocked natural view changes, for example when the user's head moves, the unobstructed image can be updated according to the gyroscope data of the smart glasses themselves. Specifically, the first and second images may be updated and the unobstructed image regenerated from them, so that the unobstructed image is updated in real time when the picture in the user's field of view changes.
  • The image acquisition device can capture the second image in real time, and the recaptured second image can be used, when needed, to update the previous second image.
  • As shown in FIG. 16, the apparatus shown in FIG. 12 may further include a second image updating unit 250 and an unobstructed image updating unit 260 connected to each other, the second image updating unit 250 being connected to the unobstructed image generation unit 230.
  • The second image updating unit 250 is configured so that, when the user's viewing angle changes, the smart glasses determine, from the detection data of the gyroscope on the glasses and the detection data of a gyroscope at a set position, whether the first image needs to be updated (a sketch of this decision follows below).
  • The unobstructed image updating unit 260 is configured so that, if so, the first image is updated through the smart glasses and the unobstructed image regenerated based on the updated first image.
  • The set position may be any position outside the smart glasses, for example on a vehicle. From the detection data of the two gyroscopes, the change of the user's viewing angle can be determined in various situations, for example whether it changes while the vehicle is moving.
  • When the two gyroscopes' data indicate that the user's viewing angle has changed by more than a set angle, the smart glasses may update the first image and generate the unobstructed image from the updated first image, so the unobstructed image is updated in real time when the viewing angle changes.
  • If the data indicate that the change does not exceed the set angle, the viewing angle may be considered approximately unchanged, and the first image need not be updated.
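A minimal sketch of the two-gyroscope decision, with an assumed threshold for the set angle:

```python
def first_image_needs_update(glasses_yaw_deg, reference_yaw_deg,
                             last_relative_yaw_deg, set_angle_deg=5.0):
    """Compare the glasses gyroscope with a gyroscope at the set position
    (e.g. on the vehicle), so vehicle motion alone does not trigger an
    update; only a change of the relative yaw beyond the set angle does."""
    relative = (glasses_yaw_deg - reference_yaw_deg + 180.0) % 360.0 - 180.0
    changed = abs(relative - last_relative_yaw_deg) > set_angle_deg
    return changed, relative
```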
  • The second image may or may not be recaptured, as long as it still contains the part of the user's field of view blocked by the obstruction in the updated first image.
  • The image acquisition device may be a video camera.
  • In that case the second image can be captured in real time and updated in real time.
  • The second image may be recaptured continuously and used as needed to generate the unobstructed image.
  • FIG. 17 is a schematic structural diagram of an obstruction see-through apparatus based on smart glasses according to yet another embodiment of the present invention.
  • The apparatus shown in FIG. 12 may further include a third image updating unit 270 connected to the unobstructed image generation unit 230.
  • The third image updating unit 270 is configured to recapture the second image with another image acquisition device after the user's viewing angle changes, and to regenerate the unobstructed image based on the recaptured second image.
  • The other image acquisition device is 'other' relative to the device that captured the second image before the viewing angle changed, and may be another of the multiple image acquisition devices.
  • It can capture the image of the user's field of view blocked by the obstruction within the user's new viewing angle.
  • The other image acquisition device may differ from the previous one in imaging angle and/or position. Using different image acquisition devices for the second image depending on the user's viewing angle ensures that the obstruction can be seen through over a wider range of viewing angles.
  • The replacement image acquisition unit 220 is further configured such that the difference between the direction angle of the obstruction relative to the user and the imaging perspective of the image acquisition device is less than 90 degrees.
  • Keeping this difference below 90 degrees overcomes the discrepancy between real three-dimensional space and an image expressing two-dimensional information, and improves the accuracy of the see-through result.
  • The difference may also be another angle, for example determined from the field of view the image acquisition device can capture, such as 1/2 of the device's lens angle of view.
  • The difference may be 30 degrees or less, which yields a fairly ideal image.
  • The image acquisition device may also use a 180-degree wide-angle lens, in which case the difference may approach 90 degrees, and the images captured by multiple lenses can be stitched to obtain a panoramic image.
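A minimal sketch of this angle constraint, assuming yaw angles in degrees and a known lens field of view:

```python
def device_angle_acceptable(obstruction_bearing_deg, device_axis_deg,
                            lens_fov_deg=60.0):
    """Accept an image acquisition device whose imaging axis lies within
    half of its lens field of view of the obstruction's bearing, and in
    any case below the 90-degree hard limit stated above."""
    diff = abs((obstruction_bearing_deg - device_axis_deg + 180.0)
               % 360.0 - 180.0)
    return diff < min(lens_fov_deg / 2.0, 90.0)
```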
  • FIG. 18 is a schematic structural diagram of a replacement image acquisition unit according to an embodiment of the present invention. As shown in FIG. 18, the number of image acquisition devices is multiple.
  • The replacement image acquisition unit 220 may include a third image acquisition module 221 and a second image acquisition module 222, connected to each other.
  • The third image acquisition module 221 is configured to capture a third image of the user's field of view through each of the image acquisition devices.
  • The second image acquisition module 222 is configured to stitch the third images together to form the unobstructed second image; a sketch of such stitching follows this list.
  • The viewing angle and/or position of each image acquisition device may differ from the others, so different third images can be captured.
  • When one image acquisition device cannot capture the complete view blocked by the obstruction, the partial blocked-view images captured by the devices are stitched together to form a complete image of the view blocked by the obstruction, which is used to replace the obstruction image in the first image.
  • The second image formed by stitching may have a wider field of view than any single third image; when the picture within the user's field of view changes and/or the user's viewing angle changes, as long as the second image still contains the image of the view blocked by the obstruction, it need not be updated. If the first image changes, the image at the position corresponding to the updated first image can be found directly in the second image, which improves image processing speed.
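By way of illustration, the stitching performed by the second image acquisition module 222 can be sketched with OpenCV's high-level stitcher; this is one possible realization, not the module's prescribed implementation.

```python
import cv2

def build_second_image(third_images):
    """Stitch the third images from the multiple acquisition devices into
    one wide, unobstructed second image."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, second_image = stitcher.stitch(third_images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError("stitching failed with status %d" % status)
    return second_image
```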
  • The obstruction see-through apparatus based on smart glasses of the embodiments of the present invention uses the image of the user's field of view captured by the camera of the smart glasses, analyzes it by various methods to identify the position and image of the obstruction in the field of view, and uses the external image acquisition device to capture the image of the blocked view.
  • The blocked portion is replaced with the image captured by the external detector, and the replacement image is then registered with and stitched to the user's field-of-view image.
  • In this way, a wearer of the smart glasses can see the image blocked by the obstruction while observing the obstruction, producing the effect of seeing through it and effectively removing the blind spot caused by the obstruction in the user's field of view.
  • Embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • The computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

An obstruction see-through method and apparatus based on smart glasses. The method includes: capturing a first image of a user's field of view through smart glasses and identifying an obstruction image in the first image (S110); capturing a second image of the user's field of view through at least one image acquisition device (S120); replacing the obstruction image in the first image with the portion of the second image corresponding to the obstruction image, and stitching to generate an unobstructed image of the user's field of view (S130). By replacing the obstruction image in the first image with the corresponding portion of the second image, the obstruction can be seen through.

Description

Obstruction see-through method and apparatus based on smart glasses

Technical Field

The present invention relates to the technical field of human-computer interaction, and in particular to an obstruction see-through method and apparatus based on smart glasses.

Background Art

When an obstruction is present, blind spots easily form in the field of view: the observer cannot see the part of the scene blocked by the obstruction, which readily causes problems of various kinds. For example, the windshield pillars of a cockpit create blind spots, and the doors also block part of the observer's view. In recent years, the development of smart glasses has made seeing through objects possible. However, current smart glasses can only be used to see through to the internal structure of an object rather than through the whole object, such as an obstruction, and they require an image of the object's internal structure to be known in advance.

Summary of the Invention

The present invention provides an obstruction see-through method and apparatus based on smart glasses, to solve the problem that existing smart glasses cannot see through an obstruction.

The present invention provides an obstruction see-through method based on smart glasses, including: capturing a first image of a user's field of view through smart glasses and identifying an obstruction image in the first image; capturing a second image of the user's field of view through at least one image acquisition device; and replacing the obstruction image in the first image with the portion of the second image corresponding to the obstruction image, and stitching to generate an unobstructed image of the user's field of view.

The present invention further provides an obstruction see-through apparatus based on smart glasses, including: an obstruction image acquisition unit configured to capture a first image of a user's field of view through smart glasses and identify an obstruction image in the first image; a replacement image acquisition unit configured to capture a second image of the user's field of view through at least one image acquisition device; and an unobstructed image generation unit configured to replace the obstruction image in the first image with the portion of the second image corresponding to the obstruction image and stitch to generate an unobstructed image of the user's field of view.

In the obstruction see-through method and apparatus based on smart glasses of the embodiments of the present invention, the image of the user's field of view captured by the camera of the smart glasses is analyzed by various methods to identify the position and image of the obstruction in the field of view; an external image acquisition device captures the image of the blocked field of view; the blocked portion is replaced with the image captured by the external detector; and the replacement image is then registered with and stitched to the user's field-of-view image. In this way, a user wearing the smart glasses can see the image blocked by the obstruction while looking at the obstruction, producing the effect of seeing through the obstruction and effectively removing the blind spot caused by the obstruction in the user's field of view.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort. In the drawings:

FIG. 1 is a schematic flowchart of an obstruction see-through method based on smart glasses according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of a method for stitching to generate an unobstructed image of a user's field of view according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of a method for stitching to generate an unobstructed image of a user's field of view according to another embodiment of the present invention;

FIG. 4 is a schematic flowchart of an obstruction see-through method based on smart glasses according to yet another embodiment of the present invention;

FIG. 5 is a schematic flowchart of an obstruction see-through method based on smart glasses according to still another embodiment of the present invention;

FIG. 6 is a schematic flowchart of an obstruction see-through method based on smart glasses according to another embodiment of the present invention;

FIG. 7 is a schematic flowchart of a method for capturing a second image of a user's field of view through multiple image acquisition devices according to an embodiment of the present invention;

FIG. 8 is a schematic flowchart of a method for stitching to obtain the second image according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of a first image captured by smart glasses according to an embodiment of the present invention;

FIG. 10 is a schematic diagram of a second image, captured by an image acquisition device, containing the portion corresponding to the obstruction image shown in FIG. 9, according to an embodiment of the present invention;

FIG. 11 is a schematic diagram of an unobstructed image stitched from the images shown in FIG. 9 and FIG. 10;

FIG. 12 is a schematic structural diagram of an obstruction see-through apparatus based on smart glasses according to an embodiment of the present invention;

FIG. 13 is a schematic structural diagram of an unobstructed image generation unit according to an embodiment of the present invention;

FIG. 14 is a schematic structural diagram of an unobstructed image generation unit according to an embodiment of the present invention;

FIG. 15 is a schematic structural diagram of an obstruction see-through apparatus based on smart glasses according to another embodiment of the present invention;

FIG. 16 is a schematic structural diagram of an obstruction see-through apparatus based on smart glasses according to another embodiment of the present invention;

FIG. 17 is a schematic structural diagram of an obstruction see-through apparatus based on smart glasses according to yet another embodiment of the present invention;

FIG. 18 is a schematic structural diagram of a replacement image acquisition unit according to an embodiment of the present invention.
Detailed Description of the Embodiments

To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments are described in further detail below with reference to the drawings. The illustrative embodiments and their description here are used to explain the present invention and do not limit it. The order of the steps in the embodiments only illustrates the process of implementing the method; the step order may be adjusted as needed, and the present invention is not limited in this regard.

In the obstruction see-through method based on smart glasses of the present invention, while smart glasses are worn, an auxiliary detector such as an external camera is creatively employed to capture an image of the blocked portion; in other words, the detector covers the blind spot caused by the obstruction in the user's field of view. The blocked portion and the image captured by the detector are then stitched together to obtain an image of the blind spot. By rendering the obstructing object in the field of view transparent, the blind spot can be eliminated.
FIG. 1 is a schematic flowchart of the obstruction see-through method based on smart glasses according to an embodiment of the present invention. As shown in FIG. 1, the method may include the following steps:

S110: capturing a first image of a user's field of view through smart glasses and identifying an obstruction image in the first image;

S120: capturing a second image of the user's field of view through at least one image acquisition device;

S130: replacing the obstruction image in the first image with the portion of the second image corresponding to the obstruction image, and stitching to generate an unobstructed image of the user's field of view.
In step S110, the smart glasses may be any of various smart glasses, generally provided with an image acquisition device such as a camera. The user generally refers to the person wearing the smart glasses. Since the image acquisition device of the smart glasses is located very close to the human eyes, whatever the user can see, the smart glasses can also see. An obstruction creates a blind area in the user's field of view; the user cannot observe the part blocked by the obstruction. For example, a window pillar of a vehicle blocks part of the view of the occupants.

In one embodiment, the images seen by the user and by the smart glasses may be regarded as approximately identical, and the slight image deformation between the two ignored, which simplifies image processing.

In another embodiment, the difference between the user's natural field of view and the view seen by the image acquisition device of the smart glasses may be corrected based on the positional relationship between the human eyes and the image acquisition device, which improves the fidelity of the image on the smart glasses display and the user experience.

In one embodiment, the difference between the user's natural field of view and the view of the smart glasses' image acquisition device may be corrected using a two-camera calibration method. Specifically, a camera may be placed at the position of the human eye, and the calibration board image captured by that camera compared with the calibration board image captured by the smart glasses camera to obtain an image transformation matrix T. All images captured by the smart glasses are transformed by the matrix T before being displayed to the eye, so that an image approximately captured at, and displayed from, the eye position is obtained.
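By way of illustration, this two-camera calibration can be sketched in Python with OpenCV; the checkerboard pattern size and the use of a single homography for T are assumptions of the sketch, not the patent's prescribed procedure.

```python
import cv2

PATTERN = (9, 6)  # inner corners of the assumed checkerboard

def calibration_matrix_T(eye_view_gray, glasses_view_gray):
    """Estimate the matrix T mapping the glasses-camera view onto the
    eye-position view from one checkerboard image pair."""
    ok_eye, corners_eye = cv2.findChessboardCorners(eye_view_gray, PATTERN)
    ok_gls, corners_gls = cv2.findChessboardCorners(glasses_view_gray, PATTERN)
    if not (ok_eye and ok_gls):
        raise RuntimeError("checkerboard not found in both views")
    T, _ = cv2.findHomography(corners_gls, corners_eye, cv2.RANSAC, 3.0)
    return T

def to_eye_view(frame, T):
    """Warp a glasses-camera frame into the approximate eye-position view."""
    height, width = frame.shape[:2]
    return cv2.warpPerspective(frame, T, (width, height))
```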
The first image may be captured by the image acquisition device on the smart glasses. The obstruction image in the first image may be obtained by image analysis in a processor of the smart glasses or an additional processor, that is, whether an obstruction exists in the user's field of view can be determined. There may be multiple obstructions in the user's field of view, and accordingly multiple obstruction images. In one embodiment, the obstruction photographed by the smart glasses camera may be recognized to decide which part of the image to replace, and the recognition process may be assisted by the gyroscope of the smart glasses themselves; the camera of the smart glasses can thus recognize whether an obstruction exists in the field of view and replace it with the corresponding image. Since the blind spots of the whole field of view can be covered by the image acquisition devices, the image around the obstruction and the blocked image can be stitched effectively, which allows the position of the blocked view to be determined more reliably and replaced with the corresponding image.

In one embodiment, the smart glasses may be see-through smart glasses whose display allows natural light to pass, so that the user can see the natural, real field of view while viewing the images displayed by the glasses. The image produced in the smart glasses can be superimposed on the target image in the real field of view, with the processed image covering the obstruction image in part of the real view, achieving the see-through effect. The blocked image may be captured by one or more image acquisition devices, at least one of which is not blocked by the obstruction.

In step S120, the image acquisition device may be any device capable of capturing images, such as a camera, a webcam, or an infrared image detector. It may be installed in various positions, as long as it can photograph the part of the user's field of view blocked by the obstruction; in other words, it must cover the position of the blocked view, i.e., the blind spot of the user's field of view, so that the image of the blocked portion can be captured. For example, an image acquisition device mounted at the edge of the vehicle roof can cover the blind spot created by a window pillar.

Preferably, the image acquisition device is arranged on the back of the obstruction, which guarantees that the image of the user's field of view blocked by the obstruction can always be captured.

In one embodiment, when a single image acquisition device can photograph the entire view blocked by all obstructions, only one device may be used to capture the second image. In another embodiment, multiple image acquisition devices simultaneously photograph different parts of the view blocked by the obstruction, and the images captured by the different devices can be stitched together to generate an image containing the whole blocked view.

In one embodiment, the position of the image acquisition device may be fixed relative to the obstruction, which greatly reduces the computational load of image processing and improves the processing result.

In step S130, the portion of the second image corresponding to the obstruction image may refer to the image of the user's field of view blocked by the obstruction. For example, if a window pillar blocks a traffic light in the user's field of view, the portion of the second image corresponding to the obstruction image may be the image of the traffic light. The unobstructed image is the image as seen through the obstruction. The images captured by the image acquisition device may be transmitted to a processor in various ways, for example wirelessly or by wire; the processor may be the processor of the smart glasses or an additional processor. The image acquisition device can capture the blocked view in real time, and the unobstructed image shown on the smart glasses display can be updated in real time.

In the embodiments of the present invention, the image acquisition device captures the image of the user's field of view including the part blocked by the obstruction, so the blocked image does not need to be known in advance. By replacing the obstruction image in the first image with the corresponding portion of the second image and stitching to generate an unobstructed image of the user's field of view, the entire obstructing object can be seen through. This overcomes the shortcoming that existing smart glasses can only see through to the internal structure of an object rather than through the whole object, and removes the limitation that an image of the object's internal structure must be known in advance.
FIG. 2 is a schematic flowchart of a method for stitching to generate an unobstructed image of the user's field of view according to an embodiment of the present invention. As shown in FIG. 2, in step S130, the method of replacing the obstruction image in the first image with the portion of the second image corresponding to the obstruction image and stitching to generate the unobstructed image may include the steps:

S131: extracting a plurality of first feature points and a plurality of second feature points from the image portion of the first image other than the obstruction image and from the second image respectively, the second feature points being in one-to-one correspondence with the first feature points;

S132: calculating, from all the first feature points and all the second feature points, an image transformation matrix converting the imaging perspective of the image acquisition device into the user's perspective;

S133: transforming the second image using the image transformation matrix;

S134: stitching the portion of the transformed second image corresponding to the obstruction image together with the first image from which the obstruction image has been removed, to generate the unobstructed image.
In step S131, the first and second feature points may be points of the respective images that reflect the shape of objects, such as vertices or corner points of objects in the image, obtainable from the gradient of intensity at the point. Which points of an image serve as feature points can be decided as needed. The one-to-one correspondence between the second and first feature points may mean that a corresponding pair are points, in the two images, that depict the same scene point in the user's field of view. The more feature points, the more accurate the stitching; in one embodiment there are at least five pairs of feature points (first and second feature points occur in pairs), which helps keep the transformation matrix correct and stable.

In one embodiment, all first feature points in the portion of the first image other than the obstruction image may be extracted, all second feature points in the second image may be extracted, and the feature vector and feature value of each feature point recorded. All the extracted first and second feature points may then be matched according to the feature vectors and feature values, and the successfully matched first and second feature points used to compute the transformation matrix; that is, the numbers of first and second feature points initially extracted may differ, for example with more second feature points than first feature points.

In steps S132 to S133, the second image is photographed from the imaging perspective of the image acquisition device, while the first image is photographed from the user's perspective. When the two perspectives differ, the common image portions of the two images generally show relative deformation. Transforming the second image with the transformation matrix converts it from the imaging perspective of the acquisition device to the user's perspective and eliminates the deformation.

In step S134, the portion of the transformed second image corresponding to the obstruction image may be obtained by comparing the first image with the transformed second image; the obstruction image is removed from the first image; then the portion of the transformed second image corresponding to the obstruction image is stitched together with the first image from which the obstruction image has been removed.

There can be many kinds of feature extraction algorithms. For example, the image may be blurred several times, and each blurred image differenced with the image before blurring to obtain a stack of difference maps; the search is then performed in 3*3*3 cubes stacked from these maps, and a point whose value is larger or smaller than all of its neighbors is a feature point. The feature vector of the feature point and its gradients in each direction are recorded as the feature value of the point. If two feature points have the same feature value and feature vector, they can be considered the same point, i.e., corresponding to each other. The transformation matrix may be a 3*3 matrix, which can be obtained by solving the equations given by the corresponding feature points for minimum variance.
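The blur-and-difference scheme described here resembles difference-of-Gaussians keypoint detection; a minimal NumPy/SciPy sketch, with assumed blur scales and threshold, is:

```python
import numpy as np
from scipy import ndimage

def dog_extrema(gray, sigmas=(1.0, 1.6, 2.6, 4.1), thresh=0.02):
    """Blur the image at increasing scales, difference consecutive blurs,
    and mark points larger or smaller than all neighbours in a 3*3*3 cube,
    following the scheme sketched above."""
    img = gray.astype(np.float32) / 255.0
    blurred = [ndimage.gaussian_filter(img, s) for s in sigmas]
    dog = np.stack([b - a for a, b in zip(blurred, blurred[1:])])
    local_max = ndimage.maximum_filter(dog, size=(3, 3, 3))
    local_min = ndimage.minimum_filter(dog, size=(3, 3, 3))
    extrema = ((dog == local_max) | (dog == local_min)) & (np.abs(dog) > thresh)
    ys, xs = np.where(extrema[1:-1].any(axis=0))  # interior scale layers
    return list(zip(ys, xs))
```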
In this embodiment, the second image is transformed by the transformation matrix before stitching to generate the unobstructed image. This prevents the problem that, when the imaging perspective of the image acquisition device is inconsistent with the angle of the user's field of view (the imaging perspective of the smart glasses), the imaging of the blocked part is not intuitive, confusing the user's view and interfering with the user's normal field of vision.

In one embodiment, in step S131, the plurality of first and second feature points may specifically be extracted from the portion of the first image other than the obstruction image and from the second image, respectively, by a feature extraction algorithm.

The feature extraction algorithm can extract corner points; a corner point may refer to a point that differs strongly from the surrounding pixels. For example, if there is a vertical edge in the image, the pixels on its left and right clearly differ strongly from those on the line of the edge, yet the edge cannot serve as a feature point (corner point), because an edge is a line with too many points on it, and it is impossible to determine exactly which of them should be the corner. If, on the other hand, two edges intersect, there must be an intersection point, and this point differs strongly from its surrounding pixels in every direction. Moreover, this intersection is unique; it does not produce multiple corners along the edges, so it can be chosen as the feature point, i.e., the corner point. It follows that feature points are generally located at the sharp corners of objects in the image. The feature points here include the first and second feature points described above.

In one embodiment, in step S134, a region of set pixel width at the edge of the first image after the obstruction image has been removed contains part of the obstruction image. In this embodiment, the image of the obstruction is removed while the blocked edge band is retained; the retained band can serve as a basis for stitching and improves stitching quality. When the blocked part is removed, the unblocked part is retained; during stitching, the part of the image where the unblocked portion meets the blocked portion can be processed at display time to make the display look more natural. The retained edge band may use a fixed number of pixels, or its width may be decided from the difference between the stitched pixels. In one embodiment the width of the band is not less than 5 pixels, which facilitates smooth display.

In another embodiment, the width of the edge band may be 0 to 5 pixels, or the edge of the obstruction may be dilated so that the removed obstruction image is somewhat larger than the actual one, making the stitched image more natural and smoother.
FIG. 3 is a schematic flowchart of a method for stitching to generate an unobstructed image of the user's field of view according to another embodiment of the present invention. As shown in FIG. 3, before the portion of the transformed second image corresponding to the obstruction image is stitched together with the first image after removal of the obstruction image to generate the unobstructed image (step S134), the method of FIG. 2 may further include the step:

S135: initially locating, according to the imaging angle and spatial position information of the image acquisition device, the stitching position between the portion of the transformed second image corresponding to the obstruction image and the first image after removal of the obstruction image. In this embodiment, using information such as the angle and position of the external image acquisition device as initial positioning information speeds up stitching.

In step S135, the positions of the image acquisition device and the obstruction are preferably fixed relative to each other, so the blocked image bears a definite relationship to the image captured by the device. The spatial position information may be the relative spatial position of the image acquisition device and the obstruction; when the position of the obstruction is found, this relative position can be recorded, and under this condition the obstruction and the device can be considered not to move relative to each other, so the position of the obstruction can be located and used for stitching. For example, if an obstruction lies in a certain angular direction, the information captured by the image acquisition device in that direction can be used to replace the obstruction image. In this way, when the smart glasses perform stitching, the range of the image that must be searched for corresponding feature points (first feature points and the corresponding second feature points) is greatly reduced, improving processing speed. In one embodiment, the gyroscope of the smart glasses themselves can provide direction and similar information, so that the angular relationship between the obstruction and the wearer is known in advance, i.e., whether the user is looking toward the obstruction; this can serve as an auxiliary basis for deciding whether to display the see-through image or run the stitching computation. In one embodiment, when multiple image acquisition devices coexist, their relative positions can be fixed; after their images have been stitched once, only the transformation matrix needs to be retained and applied to new images, without re-extracting features and searching for corresponding points each time, which improves image processing speed.
In one embodiment, in step S110, the obstruction image in the first image may specifically be identified according to a constant attribute of the obstruction or a graphic mark on the obstruction.

Specifically, the obstruction image may, for example, be identified from the boundary region where the graphic mark is located. During processing, the boundaries of the first image can be extracted, dividing the whole first image into many regions; the region containing the constant attribute or the graphic mark can be identified as the obstruction, and the image of that region replaced.

In this embodiment, the graphic mark may be a unique mark placed on the obstruction in advance to identify it. Methods of identifying the obstruction image in the first image may rely on the color, pattern, relative position, or feature identifier of the obstruction. Color may refer to the obstruction's own relatively constant color; by collecting over a period of time, the obstruction's color can be confined to a certain range, so that objects of that color can be given a certain weight when judged as obstructions. Pattern and feature identifier are essentially the same: recognition is by conformity to a specific pattern feature, after which the containing contour is identified as the obstruction. Relative position may be recorded by a gyroscope, capturing the relative relationship between obstruction and user for use in recognizing the obstruction. The method of recognizing the obstruction may be an object recognition algorithm.

The features of the obstructing object may be recorded through the smart glasses, or a graphic mark used on the obstruction, or the spatial position of the obstruction recorded, for example by recording, via the smart glasses' gyroscope, the angle the obstruction makes with the glasses, so that the obstruction is recognized when the glasses face that angle. Blind spots and the obstruction itself generally change little; exploiting the relative constancy of the obstruction's own attributes to recognize it improves recognition accuracy. For example, the color of the obstruction is relatively fixed and hardly changes, and its own image does not change when the natural view changes; such features can be used to mark out the obstruction or the blind spot. The obstruction portion of the natural field of view can be recorded and removed.

In one embodiment, when no obstruction is found in the field of view, no image replacement is needed and the picture photographed by the smart glasses camera can be used. With see-through glasses, display can also simply be suppressed, letting the user observe directly through the lenses without display; the user sees the natural field of view directly through the smart glasses. The smart glasses may be of the see-through type, for example glasses similar to googleglass and hololens.
FIG. 4 is a schematic flowchart of the obstruction see-through method based on smart glasses according to yet another embodiment of the present invention. As shown in FIG. 4, the method of FIG. 1 may further include the step:

S140: when the picture within the user's field of view changes, the smart glasses update the first image, the image acquisition device recaptures the second image to update the second image, and the unobstructed image is regenerated based on the updated first image and the updated second image.

In this embodiment, when the blocked natural view changes, for example when the user's head moves, the unobstructed image can be adjusted and updated according to the gyroscope data of the smart glasses themselves. Specifically, the first and second images may be updated and, based on the updated images, steps S120 to S130 repeated to regenerate the unobstructed image, so that the unobstructed image is updated in real time when the picture in the user's field of view changes. In one embodiment, the image acquisition device may capture the second image in real time, and the recaptured second image used when needed to update the previous one.

In one embodiment, when the viewing angle of the smart glasses user changes, whether an obstruction exists in the new field of view may be judged first, and the necessary steps above then repeated. The obstruction in the field of view may be judged as follows: first, stitch a relatively complete image from the image acquisition devices and take it as a template; with reference to the image photographed by the smart glasses and the glasses' own gyroscope information, select a partial region of the template as the view window; compare the view window captured by the acquisition devices with the image photographed by the glasses themselves to determine the obstruction; then replace the blocked image with the image captured by the acquisition devices.
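A minimal sketch of this template scheme, in which the stitched panorama serves as the map and the gyroscope yaw picks the coarse search window; the yaw-to-column calibration, window width and scaling bounds are assumptions of the sketch.

```python
import cv2

def locate_view_in_template(panorama, glasses_view, yaw_deg, deg_to_px,
                            window_w=1200):
    """Select a yaw-indexed window of the panorama and refine the position
    of the (downscaled) glasses view inside it by template matching."""
    x0 = int(yaw_deg * deg_to_px) % panorama.shape[1]
    window = panorama[:, x0:min(x0 + window_w, panorama.shape[1])]
    scale = min(1.0,
                0.5 * window.shape[1] / glasses_view.shape[1],
                0.9 * window.shape[0] / glasses_view.shape[0])
    small = cv2.resize(glasses_view, None, fx=scale, fy=scale)
    scores = cv2.matchTemplate(window, small, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(scores)
    return (x0 + top_left[0], top_left[1]), best
```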
FIG. 5 is a schematic flowchart of the obstruction see-through method based on smart glasses according to still another embodiment of the present invention. As shown in FIG. 5, the method of FIG. 1 may further include the steps:

S150: when the user's viewing angle changes, the smart glasses determine, from the detection data of the gyroscope on the smart glasses and the detection data of a gyroscope at a set position, whether the first image needs to be updated;

S160: if so, updating the first image through the smart glasses and regenerating the unobstructed image based on the updated first image.

In step S150, the set position may be any position outside the smart glasses, for example on a vehicle. From the detection data of the two gyroscopes, the change of the user's viewing angle can be determined in various situations, for example whether the user's viewing angle changes while the vehicle is moving.

In step S160, when the two gyroscopes' detection data indicate that the user's viewing angle has changed by more than a set angle, the first image can be updated through the smart glasses and the unobstructed image generated from the updated first image, so that the unobstructed image is updated in real time when the user's viewing angle changes.

In other embodiments, if the detection data indicate that the change does not exceed the set angle, the viewing angle may be considered approximately unchanged, and the first image need not be updated. In one embodiment, the image acquisition device may be a video camera; the second image can then be captured and updated in real time, being recaptured continuously and used as needed to generate the unobstructed image.

In this embodiment, the second image may or may not be recaptured, as long as it still contains the part of the user's field of view blocked by the obstruction in the updated first image.
FIG. 6 is a schematic flowchart of the obstruction see-through method based on smart glasses according to another embodiment of the present invention. As shown in FIG. 6, the method of FIG. 1 may further include the step:

S170: after the user's viewing angle changes, recapturing the second image using another image acquisition device, and regenerating the unobstructed image based on the recaptured second image.

In this embodiment, the other image acquisition device is "other" relative to the device used to capture the second image before the viewing angle changed, and may be another of the multiple image acquisition devices. It can capture the image of the user's field of view blocked by the obstruction within the user's new viewing angle, and may differ from the previous device in imaging angle and/or position. Using different image acquisition devices for the second image depending on the user's viewing angle ensures that the obstruction can be seen through from more viewing angles.

In one embodiment of the method shown in FIG. 1, the difference between the direction angle of the obstruction relative to the user and the imaging perspective of the image acquisition device is less than 90 degrees.

In this embodiment, keeping this difference below 90 degrees overcomes the discrepancy between real three-dimensional space and an image expressing two-dimensional information, and improves the accuracy of the see-through result. In other embodiments the difference may be another angle, for example determined from the field of view the acquisition device can capture, such as 1/2 of the device's lens angle of view. In one embodiment the difference may be 30 degrees or less, giving a fairly ideal image. In another embodiment the acquisition device may use a 180-degree wide-angle lens, in which case the difference may approach 90 degrees, and the images captured by multiple lenses may be stitched into a panoramic image.
FIG. 7 is a schematic flowchart of a method for capturing the second image of the user's field of view through multiple image acquisition devices according to an embodiment of the present invention. The number of image acquisition devices is multiple; as shown in FIG. 7, the method may include the steps:

S121: capturing a third image of the user's field of view through each image acquisition device;

S122: stitching the third images together to form the unobstructed second image.

In step S121, the viewing angles and/or positions of the image acquisition devices may differ from one another, so different third images can be captured.

In step S122, by stitching the third images together to form the second image without the obstruction, when one image acquisition device cannot capture the complete view blocked by the obstruction, the partial blocked-view images captured by the devices are stitched into a complete image of the blocked view, which is then used to replace the obstruction image in the first image.

In another embodiment, the stitched second image may have a wider field of view than any single third image. When the picture within the user's field of view and/or the user's viewing angle changes, as long as the second image still contains the image of the view blocked by the obstruction, it need not be updated; if the first image changes, the image at the position corresponding to the updated first image can be found directly in the second image, improving processing speed.
FIG. 8 is a schematic flowchart of a method for stitching to obtain the second image according to an embodiment of the present invention. As shown in FIG. 8, in step S122, the method of stitching the third images into the unobstructed second image may include the steps:

S1221: extracting a plurality of third feature points from the image portions of each third image other than the obstruction image, the third feature points of different third images being in one-to-one correspondence;

S1222: calculating, from the third feature points of the different third images, an image conversion matrix converting the imaging perspective of the other image acquisition devices into the imaging perspective of one of the image acquisition devices;

S1223: transforming the third images captured by the other image acquisition devices using the image conversion matrix;

S1224: stitching the transformed third images with the third image captured by the one image acquisition device, to obtain an image corresponding to the whole obstruction image in the first image.

In this embodiment, the specific implementation may be similar to the stitching method shown in FIG. 2; for example, in step S1221 a feature point extraction algorithm may be used to extract the third feature points. In step S1224, the obstruction image may be replaced with the portions corresponding to it in the images to be stitched, which are cropped and stitched into an image corresponding to the whole obstruction image in the first image.
In a specific embodiment, the obstruction see-through method may include the steps:

(1) Recognize the obstruction image with the smart glasses: determine the obstruction to be seen through by recognizing an external marker on the target object, or record the obstruction's direction with the glasses' own gyroscope, or recognize the obstruction itself, thereby marking it out.

(2) Capture the blocked image with the external detector.

(3) Search for corresponding feature points in the external image and in the natural field of view containing the obstruction.

(4) Compute the image transformation matrix and the transformed external detector image.

(5) Superimpose the image onto the original image containing the obstruction.

(6) Overlay or replace the obstruction portion of the image with the new image.

(7) When the natural field of view changes, repeat the above operations so that the image superimposed on the obstruction keeps changing with the blocked content.

(8) When the field of view of the smart glasses user changes, first judge whether an obstruction exists; if not, perform no replacement; if so, repeat operations (1) to (7).

(9) Use the image stitched from all detectors as a template, search on the template with the image photographed by the smart glasses and the glasses' gyroscope data as the window, and obtain the corresponding image blocked by the obstruction to complete the replacement.

In the method of this embodiment, the blind spot and the part blocked by the obstruction may change in real time, so this data is real-time in nature; by contrast, the result of a see-through smart glasses scheme that reveals internal structure is the object's internal structure, which is relatively fixed, not real-time, and essentially unchanging over time. The present invention can be applied where blind spots in the field of view are highly hazardous, such as traffic and military settings; since blocked objects can be seen through in real time, the displayed result is accurate and usable.
FIG. 9 is a schematic diagram of the first image captured by the smart glasses according to an embodiment of the present invention. The image seen by the smart glasses can be taken as approximately the image seen in the user's natural field of view. As shown in FIG. 9, the first image 300 captured by the smart glasses contains an obstruction image 301 and a non-obstruction image 302. The blind area of the user's field of view (corresponding to obstruction image 301) is on the right side of the view; in other embodiments the blind area may also be on the left, above, below, around the periphery, or at the center. In the portion of the first image 300 other than the obstruction image 301, i.e., in the non-obstruction image 302, first feature points A1, B1, C1, D1, E1 and F1 may be selected; these feature points are corner or intersection points.

FIG. 10 is a schematic diagram of the second image, captured by the image acquisition device, containing the portion corresponding to the obstruction image of FIG. 9, according to an embodiment of the present invention. As shown in FIG. 10, the captured second image 400 contains both the image corresponding to the obstruction image 301 of FIG. 9 and an image at least partially corresponding to the non-obstruction image 302. In the second image 400, second feature points A2, B2, C2, D2, E2 and F2 corresponding to the first feature points A1, B1, C1, D1, E1 and F1 can be found. The five-pointed star in the second image 400 is deformed relative to that of FIG. 9, and the position of the second feature point C2, corresponding to C1, is partly blocked. It suffices that the obstruction image in the second image 400 does not include the portion corresponding to the obstruction image 301 in the first image 300.

The first feature points A1 to F1 of FIG. 9 and the second feature points A2 to F2 of FIG. 10 do not necessarily coincide well. After the image of FIG. 10 is transformed, the first feature points A1 to F1 and the second feature points A2 to F2 can be made to coincide well, enabling the stitching and making the stitched image conform to the natural field of view of the human eye.

In other embodiments, multiple image acquisition devices may be used to capture images, guaranteeing full coverage of the blocked view and the natural view. The images obtained by different devices may be connected by image stitching to obtain the second image 400.

FIG. 11 is a schematic diagram of the unobstructed image stitched from the images shown in FIG. 9 and FIG. 10. As shown in FIG. 11, replacing the obstruction image 301 in the first image 300 with the portion of the transformed second image 400 corresponding to it yields the unobstructed image 500, in which the replaced portion 501 is shown in dashed lines. The images are stitched into one complete image; alternatively, the image blocked by the obstruction may be displayed as a virtual image. The blind spot can be removed by this means. Finally, the image of the blocked portion captured by the image acquisition device is stitched onto the image of the existing field of view, completing the see-through of the obstruction.

In the obstruction see-through method based on smart glasses of the embodiments of the present invention, the image of the user's field of view captured by the camera of the smart glasses is analyzed by various methods to identify the position and image of the obstruction in the field of view; an external image acquisition device captures the image of the blocked view; the blocked portion is replaced with the image captured by the external detector; and the replacement image is registered with and stitched to the user's field-of-view image. Thus, with the smart glasses, the image blocked by the obstruction can be seen while observing the obstruction, producing the see-through effect and effectively removing the blind spot caused by the obstruction in the user's field of view.
Based on the same inventive concept as the obstruction see-through method shown in FIG. 1, an embodiment of the present application further provides an obstruction see-through apparatus based on smart glasses, as described in the embodiments below. Since the principle by which this apparatus solves the problem is similar to that of the method, the implementation of the apparatus may refer to the implementation of the method, and repeated matter is not described again.

FIG. 12 is a schematic structural diagram of the obstruction see-through apparatus based on smart glasses according to an embodiment of the present invention. As shown in FIG. 12, the apparatus may include an obstruction image acquisition unit 210, a replacement image acquisition unit 220 and an unobstructed image generation unit 230, connected in sequence.

The obstruction image acquisition unit 210 is configured to capture a first image of the user's field of view through the smart glasses and identify the obstruction image in the first image.

The replacement image acquisition unit 220 is configured to capture a second image of the user's field of view through at least one image acquisition device.

The unobstructed image generation unit 230 is configured to replace the obstruction image in the first image with the portion of the second image corresponding to the obstruction image and stitch to generate an unobstructed image of the user's field of view.
In the obstruction image acquisition unit 210, the smart glasses may be any of various smart glasses, generally provided with an image acquisition device such as a camera. The user generally refers to the person wearing the smart glasses; since the image acquisition device of the glasses is very close to the eyes, whatever the user can see, the glasses can also see. An obstruction creates a blind area in the user's field of view, and the user cannot observe the blocked part; for example, a window pillar of a vehicle blocks part of the occupants' view.

In one embodiment, the images seen by the user and by the smart glasses may be regarded as approximately identical, and the slight deformation between them ignored, which simplifies image processing.

In another embodiment, the difference between the user's natural field of view and the view of the glasses' image acquisition device may be corrected based on the positional relationship between the eyes and the device, improving the fidelity of the displayed image and the user experience.

In one embodiment, this difference may be corrected by a two-camera calibration method: a camera is placed at the eye position, the calibration board image it captures is compared with the calibration board image captured by the smart glasses camera, and an image transformation matrix T is obtained. All images captured by the glasses are transformed by T before display, yielding an image approximately as captured and displayed at the eye position.

The first image may be captured by the image acquisition device on the smart glasses. The obstruction image in it may be obtained by image analysis in a processor of the glasses or an additional processor, i.e., whether an obstruction exists in the user's field of view can be determined. There may be multiple obstructions and thus multiple obstruction images. In one embodiment, the obstruction photographed by the glasses camera may be recognized to decide which part of the image to replace, with the glasses' own gyroscope as an auxiliary reference; the camera can thus recognize whether an obstruction exists in the field of view and replace it with the corresponding image. Since the blind spots of the whole field of view can be covered by the image acquisition devices, the image around the obstruction and the blocked image can be stitched effectively, allowing the position of the blocked view to be determined and replaced reliably.

In one embodiment, the smart glasses may be see-through smart glasses whose display allows natural light to pass, so the user can see the natural, real field of view while viewing the displayed images. The image produced in the glasses can be superimposed on the target image in the real view, with the processed image covering the obstruction image in part of the real view, achieving the see-through effect. The blocked image may be captured by one or more image acquisition devices, at least one of which is not blocked by the obstruction.

In the replacement image acquisition unit 220, the image acquisition device may be any device capable of capturing images, such as a camera, a webcam, or an infrared image detector. It may be installed in various positions, provided it can photograph the part of the user's field of view blocked by the obstruction; in other words, it must cover the blocked view, i.e., the blind spot, so the blocked portion can be captured. For example, a device at the edge of the vehicle roof can cover the blind spot of a window pillar.

Preferably, the image acquisition device is arranged on the back of the obstruction, guaranteeing that the image of the user's field of view blocked by the obstruction can always be captured.

In one embodiment, when a single image acquisition device can photograph the entire view blocked by all obstructions, only one device may be used to capture the second image. In another embodiment, multiple devices simultaneously photograph different parts of the blocked view, and their images can be stitched together to generate an image containing the whole blocked view.

In one embodiment, the position of the image acquisition device may be fixed relative to the obstruction, greatly reducing the computational load of image processing and improving the processing result.

In the unobstructed image generation unit 230, the portion of the second image corresponding to the obstruction image may refer to the image of the user's field of view blocked by the obstruction. For example, if a window pillar blocks a traffic light in the user's view, the corresponding portion of the second image may be the image of the traffic light. The unobstructed image is the image as seen through the obstruction. The images captured by the device may be transmitted to a processor in various ways, for example wirelessly or by wire; the processor may be that of the smart glasses or an additional one. The device can capture the blocked view in real time, and the unobstructed image shown on the glasses display can be updated in real time.

In the embodiments of the present invention, the image acquisition device captures the image of the user's field of view including the part blocked by the obstruction, so the blocked image does not need to be known in advance. By replacing the obstruction image in the first image with the corresponding portion of the second image and stitching to generate the unobstructed image, the entire obstructing object can be seen through. This overcomes the shortcoming that existing smart glasses can only see through to the internal structure of an object rather than through the whole object, and removes the limitation that an image of the internal structure must be known in advance.
FIG. 13 is a schematic structural diagram of the unobstructed image generation unit according to an embodiment of the present invention. As shown in FIG. 13, the unobstructed image generation unit 230 may include a feature point extraction module 231, a transformation matrix generation module 232, an image transformation module 233 and an unobstructed image generation module 234, connected in sequence.

The feature point extraction module 231 is configured to extract a plurality of first feature points and a plurality of second feature points from the image portion of the first image other than the obstruction image and from the second image respectively, the second feature points being in one-to-one correspondence with the first feature points.

The transformation matrix generation module 232 is configured to calculate, from all the first feature points and all the second feature points, an image transformation matrix converting the imaging perspective of the image acquisition device into the user's perspective.

The image transformation module 233 is configured to transform the second image using the image transformation matrix.

The unobstructed image generation module 234 is configured to stitch the portion of the transformed second image corresponding to the obstruction image together with the first image from which the obstruction image has been removed, generating the unobstructed image.

In the feature point extraction module 231, the first and second feature points may be points of the respective images that reflect object shape, such as vertices or corner points of objects in the image, obtainable from the gradient of intensity at the point; which points serve as feature points can be decided as needed. The one-to-one correspondence means that a corresponding pair are points, in the two images, that depict the same scene point in the user's field of view. The more feature points, the more accurate the stitching; in one embodiment there are at least five pairs (first and second feature points occur in pairs), which helps keep the transformation matrix correct and stable.

In one embodiment, all first feature points in the portion of the first image other than the obstruction image and all second feature points in the second image may be extracted, with each point's feature vector and feature value recorded; the points are then matched by feature vector and feature value, and the successfully matched pairs used to compute the transformation matrix. The numbers of first and second feature points initially extracted may thus differ, for example with more second feature points than first.

In the transformation matrix generation module 232 and the image transformation module 233, the second image is photographed from the device's imaging perspective and the first image from the user's perspective; when the two perspectives differ, the common image portions generally show relative deformation. Transforming the second image with the matrix converts it to the user's perspective and eliminates the deformation.

In the unobstructed image generation module 234, the portion of the transformed second image corresponding to the obstruction image may be obtained by comparing the first image with the transformed second image; the obstruction image is removed from the first image, and the corresponding portion of the transformed second image is then stitched to the first image from which the obstruction image has been removed.

In this embodiment, transforming the second image by the matrix before stitching prevents the problem that, when the device's imaging perspective disagrees with the angle of the user's field of view (the glasses' imaging perspective), the imaging of the blocked part is unintuitive, confusing the user's view and interfering with the user's normal field of vision.
In one embodiment, the feature point extraction module 231 may include a feature extraction algorithm module 2311.

The feature extraction algorithm module 2311 is configured to extract the plurality of first and second feature points, by a feature extraction algorithm, from the portion of the first image other than the obstruction image and from the second image respectively.

The feature extraction algorithm can extract corner points, i.e., points differing strongly from the surrounding pixels. For example, a vertical edge clearly differs from the pixels on either side, yet the edge cannot serve as a feature point, because it is a line with too many points on it, none of which can be singled out as the corner. If two edges intersect, however, there must be an intersection point that differs strongly from its surroundings in every direction; being unique, it does not produce multiple corners along the edges and can be chosen as the feature point, i.e., the corner point. Feature points are thus generally located at the sharp corners of objects in the image. The feature points here include the first and second feature points described above.

In one embodiment, the unobstructed image generation module 234 is further configured such that a region of set pixel width at the edge of the first image after removal of the obstruction image contains part of the obstruction image.

In this embodiment, the image of the obstruction is removed while the blocked edge band is retained as a basis for stitching, improving stitching quality. When the blocked part is removed, the unblocked part is retained; where the two meet, the image can be processed at display time to look more natural. The retained band may use a fixed number of pixels, or its width may be decided from the difference between the stitched pixels; in one embodiment the band is not less than 5 pixels wide, facilitating smooth display.

In another embodiment, the band may be 0 to 5 pixels wide, or the obstruction's edge may be dilated so that the removed obstruction image is somewhat larger than the actual one, making the stitched image more natural and smoother.
FIG. 14 is a schematic structural diagram of the unobstructed image generation unit according to an embodiment of the present invention. As shown in FIG. 14, the unobstructed image generation unit 230 may further include an initial positioning module 235 connected between the image transformation module 233 and the unobstructed image generation module 234.

The initial positioning module 235 is configured to initially locate, according to the imaging angle and spatial position information of the image acquisition device, the stitching position between the portion of the transformed second image corresponding to the obstruction image and the first image after removal of the obstruction image.

In the initial positioning module 235, the positions of the image acquisition device and the obstruction are preferably fixed relative to each other, so the blocked image bears a definite relationship to the image captured by the device. The spatial position information may be their relative spatial position; once the obstruction's position is found, this relative position can be recorded, and under this condition the obstruction and the device can be considered not to move relative to each other, so the obstruction's position can be located and used for stitching. For example, if an obstruction lies in a certain angular direction, the information captured by the device in that direction can be used to replace the obstruction image. In this way, when the smart glasses perform stitching, the range of image that must be searched for corresponding feature points (first feature points and the corresponding second feature points) is greatly reduced, improving processing speed. In one embodiment, the glasses' own gyroscope can provide direction and similar information, so the angular relationship between obstruction and wearer is known in advance, i.e., whether the user is looking toward the obstruction, which can serve as an auxiliary basis for deciding whether to display the see-through image or run the stitching computation. In one embodiment, when multiple image acquisition devices coexist, their relative positions can be fixed; after their images have been stitched once, only the transformation matrix needs to be retained and applied to new images, without re-extracting features and searching for corresponding points each time, improving processing speed.
In one embodiment, the obstruction image acquisition unit 210 may include an obstruction image acquisition module 211.

The obstruction image acquisition module 211 is configured to identify the obstruction image in the first image according to a constant attribute of the obstruction or a graphic mark on the obstruction.

Specifically, the obstruction image may be identified, for example, from the boundary region where the graphic mark lies. During processing, the boundaries of the first image can be extracted, dividing it into many regions; the region containing the constant attribute or graphic mark can be identified as the obstruction, and its image replaced.

In this embodiment, the graphic mark may be a unique mark placed on the obstruction in advance to identify it. Identification methods may rely on the obstruction's color, pattern, relative position or feature identifier. Color refers to the obstruction's relatively constant color: by collecting over a period of time, its color can be confined to a certain range, and objects of that color can be given a certain weight when judged as obstructions. Pattern and feature identifier are essentially the same: recognition is by conformity to a specific pattern feature, after which the containing contour is identified as the obstruction. Relative position may be recorded with a gyroscope, capturing the obstruction-user relationship for use in recognition. The recognition method may be an object recognition algorithm.

The features of the obstructing object may be recorded through the smart glasses, or a graphic mark used on the obstruction, or its spatial position recorded, for example by recording, via the glasses' gyroscope, the angle the obstruction makes with the glasses, so the obstruction is recognized when the glasses face that angle. Blind spots and the obstruction itself generally change little; exploiting the constancy of the obstruction's own attributes improves recognition accuracy. For example, the obstruction's color is relatively fixed and its own image does not change when the natural view changes; such features mark out the obstruction or blind spot. The obstruction portion of the natural view can be recorded and removed.

In one embodiment, if no obstruction is found in the field of view, no replacement is needed and the picture photographed by the glasses camera can be used. With see-through glasses, display can simply be suppressed and the user observes directly through the lenses, seeing the natural field of view; the glasses may be of the see-through type, for example glasses similar to googleglass and hololens.
FIG. 15 is a schematic structural diagram of the obstruction see-through apparatus based on smart glasses according to another embodiment of the present invention. As shown in FIG. 15, the apparatus of FIG. 12 may further include a first image updating unit 240 connected to the unobstructed image generation unit 230.

The first image updating unit 240 is configured so that, when the picture within the user's field of view changes, the smart glasses update the first image, the image acquisition device recaptures the second image to update it, and the unobstructed image is regenerated based on the updated first image and the updated second image.

In this embodiment, when the blocked natural view changes, for example when the user's head moves, the unobstructed image can be adjusted and updated according to the glasses' own gyroscope data. Specifically, the first and second images may be updated, and the unobstructed image regenerated from the updated first and second images, so the unobstructed image is updated in real time when the picture in the user's field of view changes. In one embodiment, the device may capture the second image in real time, and the recaptured second image used when needed to update the previous one.
FIG. 16 is a schematic structural diagram of the obstruction see-through apparatus based on smart glasses according to another embodiment of the present invention. As shown in FIG. 16, the apparatus of FIG. 12 may further include a second image updating unit 250 and an unobstructed image updating unit 260 connected to each other, the second image updating unit 250 being connected to the unobstructed image generation unit 230.

The second image updating unit 250 is configured so that, when the user's viewing angle changes, the smart glasses determine, from the detection data of the gyroscope on the glasses and the detection data of a gyroscope at a set position, whether the first image needs to be updated.

The unobstructed image updating unit 260 is configured so that, if so, the first image is updated through the smart glasses and the unobstructed image regenerated based on the updated first image.

In the second image updating unit 250, the set position may be any position outside the smart glasses, for example on a vehicle. From the two gyroscopes' detection data, the change of the user's viewing angle can be determined in various situations, for example whether it changes while the vehicle is moving.

In the unobstructed image updating unit 260, when the two gyroscopes' data indicate that the user's viewing angle has changed by more than a set angle, the first image can be updated through the glasses and the unobstructed image generated from the updated first image, updating the unobstructed image in real time as the viewing angle changes.

In other embodiments, if the data indicate the change does not exceed the set angle, the viewing angle may be considered approximately unchanged and the first image need not be updated.

In this embodiment, the second image may or may not be recaptured, as long as it still contains the part of the user's field of view blocked by the obstruction in the updated first image. In one embodiment, the image acquisition device may be a video camera, capturing and updating the second image in real time; the second image may be recaptured continuously and used as needed to generate the unobstructed image.
FIG. 17 is a schematic structural diagram of the obstruction see-through apparatus based on smart glasses according to yet another embodiment of the present invention. As shown in FIG. 17, the apparatus of FIG. 12 may further include a third image updating unit 270 connected to the unobstructed image generation unit 230.

The third image updating unit 270 is configured to recapture the second image with another image acquisition device after the user's viewing angle changes, and to regenerate the unobstructed image based on the recaptured second image.

In this embodiment, the other image acquisition device is "other" relative to the device that captured the second image before the viewing angle changed, and may be another of the multiple image acquisition devices. It can capture the image of the user's field of view blocked by the obstruction within the user's new viewing angle, and may differ from the previous device in imaging angle and/or position. Using different devices for the second image depending on the viewing angle ensures a wider range of angles from which the user can see through the obstruction.

In one embodiment, the replacement image acquisition unit 220 is further configured such that the difference between the direction angle of the obstruction relative to the user and the imaging perspective of the image acquisition device is less than 90 degrees.

In this embodiment, keeping this difference below 90 degrees overcomes the discrepancy between real three-dimensional space and an image expressing two-dimensional information, and improves the accuracy of the see-through result. In other embodiments the difference may be another angle, for example determined from the field of view the device can capture, such as 1/2 of its lens angle of view. In one embodiment the difference may be 30 degrees or less, giving a fairly ideal image. In another embodiment the device may use a 180-degree wide-angle lens, in which case the difference may approach 90 degrees, and images from multiple lenses may be stitched into a panoramic image.
FIG. 18 is a schematic structural diagram of the replacement image acquisition unit according to an embodiment of the present invention. As shown in FIG. 18, the number of image acquisition devices is multiple, and the replacement image acquisition unit 220 may include a third image acquisition module 221 and a second image acquisition module 222 connected to each other.

The third image acquisition module 221 is configured to capture a third image of the user's field of view through each image acquisition device.

The second image acquisition module 222 is configured to stitch the third images together to form the unobstructed second image.

In the third image acquisition module 221, the viewing angles and/or positions of the image acquisition devices may differ from one another, so different third images can be captured.

In the second image acquisition module 222, stitching the third images into the unobstructed second image allows, when one device cannot capture the complete view blocked by the obstruction, the partial blocked-view images captured by the devices to be stitched into a complete image of the blocked view for replacing the obstruction image in the first image.

In another embodiment, the stitched second image may have a wider field of view than any single third image; when the picture in the user's field of view and/or the viewing angle changes, as long as the second image still contains the image of the blocked view, it need not be updated. If the first image changes, the image at the corresponding position can be found directly in the second image, improving processing speed.

In the obstruction see-through apparatus based on smart glasses of the embodiments of the present invention, the image of the user's field of view captured by the glasses' camera is analyzed by various methods to identify the position and image of the obstruction in the field of view; the external image acquisition device captures the image of the blocked view; the blocked portion is replaced with the image captured by the external detector; and the replacement image is registered with and stitched to the user's field-of-view image. Thus, with the smart glasses, the image blocked by the obstruction can be seen while observing the obstruction, producing the see-through effect and effectively removing the blind spot caused by the obstruction in the user's field of view.
In the description of this specification, reference to the terms "one embodiment", "a specific embodiment", "some embodiments", "for example", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic statements of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials or characteristics may be combined in a suitable manner in any one or more embodiments or examples.

Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, and the like) containing computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

The specific embodiments described above further explain the objectives, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit its scope of protection; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (22)

  1. An obstruction see-through method based on smart glasses, comprising:
    capturing a first image of a user's field of view through smart glasses and identifying an obstruction image in the first image;
    capturing a second image of the user's field of view through at least one image acquisition device;
    replacing the obstruction image in the first image with a portion of the second image corresponding to the obstruction image, and stitching to generate an unobstructed image of the user's field of view.
  2. The obstruction see-through method based on smart glasses according to claim 1, wherein replacing the obstruction image in the first image with the portion of the second image corresponding to the obstruction image and stitching to generate the unobstructed image of the user's field of view comprises:
    extracting a plurality of first feature points and a plurality of second feature points from an image portion of the first image other than the obstruction image and from the second image respectively, the second feature points being in one-to-one correspondence with the first feature points;
    calculating, from all the first feature points and all the second feature points, an image transformation matrix converting an imaging perspective of the image acquisition device into the user's perspective;
    transforming the second image using the image transformation matrix;
    stitching the portion of the transformed second image corresponding to the obstruction image together with the first image from which the obstruction image has been removed, to generate the unobstructed image.
  3. The obstruction see-through method based on smart glasses according to claim 2, wherein extracting the plurality of first feature points and the plurality of second feature points from the image portion of the first image other than the obstruction image and from the second image respectively comprises:
    extracting, by a feature extraction algorithm, the plurality of first feature points and the plurality of second feature points from the image portion of the first image other than the obstruction image and from the second image respectively.
  4. The obstruction see-through method based on smart glasses according to claim 2, wherein
    a region of set pixel width at an edge of the first image from which the obstruction image has been removed contains part of the obstruction image.
  5. The obstruction see-through method based on smart glasses according to claim 2, further comprising, before stitching the portion of the transformed second image corresponding to the obstruction image together with the first image from which the obstruction image has been removed to generate the unobstructed image:
    initially locating, according to an imaging angle and spatial position information of the image acquisition device, a stitching position between the portion of the transformed second image corresponding to the obstruction image and the first image from which the obstruction image has been removed.
  6. The obstruction see-through method based on smart glasses according to claim 1, wherein identifying the obstruction image in the first image through the smart glasses comprises:
    identifying the obstruction image in the first image according to a constant attribute of the obstruction or a graphic mark on the obstruction.
  7. The obstruction see-through method based on smart glasses according to claim 1, further comprising:
    when a picture within the user's field of view changes, updating, by the smart glasses, the first image, recapturing, by the image acquisition device, the second image to update the second image, and regenerating the unobstructed image based on the updated first image and the updated second image.
  8. The obstruction see-through method based on smart glasses according to claim 1, further comprising:
    when the user's viewing angle changes, determining, by the smart glasses, according to detection data of a gyroscope on the smart glasses and detection data of a gyroscope at a set position, whether the first image needs to be updated;
    if so, updating the first image through the smart glasses and regenerating the unobstructed image based on the updated first image.
  9. The obstruction see-through method based on smart glasses according to claim 1, further comprising:
    after the user's viewing angle changes, recapturing the second image using another image acquisition device, and regenerating the unobstructed image based on the recaptured second image.
  10. The obstruction see-through method based on smart glasses according to claim 1, wherein a difference between a direction angle of the obstruction relative to the user and an imaging perspective of the image acquisition device is less than 90 degrees.
  11. The obstruction see-through method based on smart glasses according to claim 1, wherein the number of image acquisition devices is multiple, and capturing the second image of the user's field of view through the multiple image acquisition devices comprises:
    capturing a third image of the user's field of view through each of the image acquisition devices;
    stitching the third images together to form the unobstructed second image.
  12. An obstruction see-through apparatus based on smart glasses, comprising:
    an obstruction image acquisition unit configured to capture a first image of a user's field of view through smart glasses and identify an obstruction image in the first image;
    a replacement image acquisition unit configured to capture a second image of the user's field of view through at least one image acquisition device;
    an unobstructed image generation unit configured to replace the obstruction image in the first image with a portion of the second image corresponding to the obstruction image and stitch to generate an unobstructed image of the user's field of view.
  13. The obstruction see-through apparatus based on smart glasses according to claim 12, wherein the unobstructed image generation unit comprises:
    a feature point extraction module configured to extract a plurality of first feature points and a plurality of second feature points from an image portion of the first image other than the obstruction image and from the second image respectively, the second feature points being in one-to-one correspondence with the first feature points;
    a transformation matrix generation module configured to calculate, from all the first feature points and all the second feature points, an image transformation matrix converting an imaging perspective of the image acquisition device into the user's perspective;
    an image transformation module configured to transform the second image using the image transformation matrix;
    an unobstructed image generation module configured to stitch the portion of the transformed second image corresponding to the obstruction image together with the first image from which the obstruction image has been removed, to generate the unobstructed image.
  14. The obstruction see-through apparatus based on smart glasses according to claim 13, wherein the feature point extraction module comprises:
    a feature extraction algorithm module configured to extract, by a feature extraction algorithm, the plurality of first feature points and the plurality of second feature points from the image portion of the first image other than the obstruction image and from the second image respectively.
  15. The obstruction see-through apparatus based on smart glasses according to claim 13, wherein the unobstructed image generation module is further configured such that:
    a region of set pixel width at an edge of the first image from which the obstruction image has been removed contains part of the obstruction image.
  16. The obstruction see-through apparatus based on smart glasses according to claim 13, wherein the unobstructed image generation unit further comprises:
    an initial positioning module configured to initially locate, according to an imaging angle and spatial position information of the image acquisition device, a stitching position between the portion of the transformed second image corresponding to the obstruction image and the first image from which the obstruction image has been removed.
  17. The obstruction see-through apparatus based on smart glasses according to claim 12, wherein the obstruction image acquisition unit comprises:
    an obstruction image acquisition module configured to identify the obstruction image in the first image according to a constant attribute of the obstruction or a graphic mark on the obstruction.
  18. The obstruction see-through apparatus based on smart glasses according to claim 12, further comprising:
    a first image updating unit configured so that, when a picture within the user's field of view changes, the smart glasses update the first image, the image acquisition device recaptures the second image to update the second image, and the unobstructed image is regenerated based on the updated first image and the updated second image.
  19. The obstruction see-through apparatus based on smart glasses according to claim 12, further comprising:
    a second image updating unit configured so that, when the user's viewing angle changes, the smart glasses determine, according to detection data of a gyroscope on the smart glasses and detection data of a gyroscope at a set position, whether the first image needs to be updated;
    an unobstructed image updating unit configured to, if so, update the first image through the smart glasses and regenerate the unobstructed image based on the updated first image.
  20. The obstruction see-through apparatus based on smart glasses according to claim 12, further comprising:
    a third image updating unit configured to recapture the second image using another image acquisition device after the user's viewing angle changes, and to regenerate the unobstructed image based on the recaptured second image.
  21. The obstruction see-through apparatus based on smart glasses according to claim 12, wherein the replacement image acquisition unit is further configured such that:
    the difference between the direction angle of the obstruction relative to the user and the imaging perspective of the image acquisition device is less than 90 degrees.
  22. The obstruction see-through apparatus based on smart glasses according to claim 12, wherein the number of image acquisition devices is multiple, and the replacement image acquisition unit comprises:
    a third image acquisition module configured to capture a third image of the user's field of view through each of the image acquisition devices;
    a second image acquisition module configured to stitch the third images together to form the unobstructed second image.
PCT/CN2016/084002 2016-05-31 2016-05-31 Obstruction see-through method and apparatus based on smart glasses WO2017206042A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/084002 WO2017206042A1 (zh) 2016-05-31 2016-05-31 Obstruction see-through method and apparatus based on smart glasses
US16/008,815 US10607414B2 (en) 2016-05-31 2018-06-14 Method and device for seeing through an obstruction based on smart glasses, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/084002 WO2017206042A1 (zh) 2016-05-31 2016-05-31 Obstruction see-through method and apparatus based on smart glasses

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/008,815 Continuation US10607414B2 (en) 2016-05-31 2018-06-14 Method and device for seeing through an obstruction based on smart glasses, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2017206042A1 true WO2017206042A1 (zh) 2017-12-07

Family

ID=60478329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/084002 WO2017206042A1 (zh) 2016-05-31 2016-05-31 基于智能眼镜的遮挡物透视方法及装置

Country Status (2)

Country Link
US (1) US10607414B2 (zh)
WO (1) WO2017206042A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109936737B (zh) * 2017-12-15 2021-11-16 京东方科技集团股份有限公司 Test method and system for a wearable device
CN108803022A (zh) * 2018-02-13 2018-11-13 成都理想境界科技有限公司 Monocular large-field-of-view near-eye display device and binocular large-field-of-view near-eye display device
JP2022114600A (ja) * 2021-01-27 2022-08-08 キヤノン株式会社 Imaging system, display device, terminal device, and control method of imaging system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9639968B2 (en) * 2014-02-18 2017-05-02 Harman International Industries, Inc. Generating an augmented view of a location of interest
US20150312468A1 (en) * 2014-04-23 2015-10-29 Narvaro Inc. Multi-camera system controlled by head rotation
US9811889B2 (en) * 2014-12-31 2017-11-07 Nokia Technologies Oy Method, apparatus and computer program product for generating unobstructed object views
JP6540572B2 (ja) * 2016-03-29 2019-07-10 ブラザー工業株式会社 表示装置および表示制御方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000004395A (ja) * 1998-06-15 2000-01-07 Sony Corp Image processing device for a video camera and head-mounted display
JP2008143701A (ja) * 2006-12-13 2008-06-26 Mitsubishi Heavy Ind Ltd System and method for improving the visibility of an industrial vehicle
CN101925930A (zh) * 2008-03-27 2010-12-22 松下电器产业株式会社 Blind spot display device
CN104163133A (zh) * 2013-05-16 2014-11-26 福特环球技术公司 Rearview camera system using rearview mirror position
CN103358996A (zh) * 2013-08-13 2013-10-23 吉林大学 Vehicle-mounted display device making the automobile A-pillar see-through
CN106056534A (zh) * 2016-05-31 2016-10-26 中国科学院深圳先进技术研究院 Obstruction see-through method and apparatus based on smart glasses

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111142255A (zh) * 2018-11-02 2020-05-12 成都理想境界科技有限公司 AR optical display module and display device
CN111142256A (zh) * 2018-11-02 2020-05-12 成都理想境界科技有限公司 VR optical display module and display device
US11443443B2 (en) * 2020-07-16 2022-09-13 Siemens Industry Software Inc Method and a data processing system for aligning a first panoramic image and a second panoramic image in a navigation procedure

Also Published As

Publication number Publication date
US10607414B2 (en) 2020-03-31
US20180300954A1 (en) 2018-10-18

Similar Documents

Publication Publication Date Title
CN106056534B (zh) Obstruction see-through method and apparatus based on smart glasses
WO2017206042A1 (zh) Obstruction see-through method and apparatus based on smart glasses
US20200183492A1 (en) Eye and Head Tracking
US10228763B2 (en) Gaze direction mapping
JP6695503B2 (ja) Method and system for monitoring the state of a vehicle driver
CN107472135B (zh) Image generation device, image generation method, and recording medium
CN108171673B (zh) Image processing method and device, vehicle-mounted head-up display system, and vehicle
JP4811259B2 (ja) Gaze direction estimation device and gaze direction estimation method
JP4692371B2 (ja) Image processing device, image processing method, image processing program, recording medium recording the image processing program, and moving object detection system
JP2003015816A (ja) Face and gaze recognition device using a stereo camera
JP2009254525A (ja) Pupil detection method and device
JP6043933B2 (ja) Drowsiness level estimation device, drowsiness level estimation method, and drowsiness level estimation processing program
JP6855872B2 (ja) Face recognition device
CN112183200A (zh) Eye tracking method and system based on video images
CN115223231A (zh) Gaze direction detection method and device
CN114463832A (zh) Point-cloud-based gaze tracking method and system for traffic scenes
CN114565531A (zh) Image inpainting method, device, equipment and medium
JP2009077022A (ja) Driving support system and vehicle
KR100651104B1 (ko) Gaze-based computer interface device and method
CN113283329B (zh) Gaze tracking system, eye tracker, gaze tracking method, device and medium
EP4322114A1 (en) Projective bisector mirror
US12001746B2 (en) Electronic apparatus, and method for displaying image on display device
US20230244307A1 (en) Visual assistance
KR101578030B1 (ko) Apparatus and method for generating an event
CN116243800A (zh) Display eye-movement control method based on a glasses-type eye tracker

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16903430

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/04/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16903430

Country of ref document: EP

Kind code of ref document: A1