WO2021103270A1 - VR image processing method and apparatus, VR glasses, and readable storage medium - Google Patents

VR image processing method and apparatus, VR glasses, and readable storage medium Download PDF

Info

Publication number
WO2021103270A1
WO2021103270A1 (PCT/CN2019/130402)
Authority
WO
WIPO (PCT)
Prior art keywords
eye
image
viewpoint
area
view
Prior art date
Application number
PCT/CN2019/130402
Other languages
English (en)
French (fr)
Inventor
张向军
姜滨
迟小羽
Original Assignee
歌尔股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔股份有限公司 filed Critical 歌尔股份有限公司
Priority to US17/606,163 priority Critical patent/US11785202B2/en
Publication of WO2021103270A1 publication Critical patent/WO2021103270A1/zh

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/279 Image signal generators from 3D object models, the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

Definitions

  • This application relates to the field of VR imaging technology, and in particular to a VR image processing method, device, VR glasses, and readable storage medium.
  • the purpose of this application is to provide a VR image processing method and apparatus, VR glasses, and a readable storage medium that reduce the amount of image data to be rendered as much as possible while leaving the stereoscopic effect of the VR image essentially unaffected, thereby shortening latency, raising the frame rate, and reducing motion sickness.
  • this application provides a VR image processing method, including:
  • the peripheral area of the left-eye viewpoint and the peripheral area of the right-eye viewpoint are rendered according to the peripheral image view angle to obtain the same viewpoint peripheral image;
  • when the displacement of the left-eye viewpoint or the right-eye viewpoint within a preset duration is smaller than a preset displacement, the area of the corresponding viewpoint area is reduced and the area of the corresponding viewpoint peripheral area is increased.
  • determining the to-be-selected area according to the positions of the left-eye view angle and the right-eye view angle includes: drawing the line connecting the left-eye view angle and the right-eye view angle, and taking the midpoint of the line as the center of a circle with a preset radius, no more than half the length of the line, to obtain a circular to-be-selected area;
  • selecting any point in the to-be-selected area as the peripheral image angle of view includes:
  • the center of the circular candidate area is determined as the peripheral image angle of view.
  • the resolution of the peripheral image of the viewpoint is lower than the resolution of the left-eye viewpoint image and the resolution of the right-eye viewpoint image.
  • the VR image processing method further includes:
  • the left-eye transition area and the right-eye transition area are rendered according to the transition image view angle to obtain a left-eye transition image and a right-eye transition image; wherein the left-eye transition area surrounds the outer boundary of the left-eye viewpoint area, the peripheral area of the left-eye viewpoint surrounds the outer boundary of the left-eye transition area, the right-eye transition area surrounds the outer boundary of the right-eye viewpoint area, and the peripheral area of the right-eye viewpoint surrounds the outer boundary of the right-eye transition area;
  • the resolutions of the left-eye transition image and the right-eye transition image are both lower than the resolution of the corresponding viewpoint image, and both are higher than the resolution of the corresponding viewpoint peripheral image.
  • the method further includes:
  • determining the transition image angle of view according to the left eye angle of view and the right eye angle of view includes:
  • the midpoint of the line connecting the left-eye view angle and the right-eye view angle is determined as the central view angle; the midpoint between the left-eye view angle and the central view angle is determined as the first left-eye transition image view angle; and the midpoint between the right-eye view angle and the central view angle is determined as the first right-eye transition image view angle.
  • determining the transition image angle of view according to the left eye angle of view and the right eye angle of view includes:
  • the midpoint of the connection between the left-eye perspective and the right-eye perspective is simultaneously used as a second left-eye transitional image perspective and a second right-eye transitional image perspective.
  • the VR image processing method further includes:
  • the image parameters of each pixel in the image of the transition area are calculated according to the formula A=W×x+G×(1-x);
  • A is any pixel in the image of the transition area;
  • W is the peripheral image of the viewpoint;
  • G is the image of the transition area;
  • x is the weight, which fades gradually from 1 to 0 as the distance of A from the inner edge of the transition area increases.
  • the transformed image is adjusted according to the matching feature points, so that the adjusted transformed image and the standard image have more matching feature points.
  • this application also provides a VR image processing device, which includes:
  • a left-eye viewpoint image acquisition unit, used to render the left-eye viewpoint area according to the left-eye view angle to obtain the left-eye viewpoint image;
  • a right-eye viewpoint image acquisition unit, used to render the right-eye viewpoint area according to the right-eye view angle to obtain the right-eye viewpoint image;
  • a peripheral image view angle selection unit, configured to determine a to-be-selected area according to the positions of the left-eye view angle and the right-eye view angle, and select any point in the to-be-selected area as the peripheral image view angle;
  • a viewpoint peripheral image acquisition unit, configured to render the peripheral area of the left-eye viewpoint and the peripheral area of the right-eye viewpoint according to the peripheral image view angle to obtain the same viewpoint peripheral image;
  • An image splicing unit for splicing the peripheral image of the viewpoint with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a left-eye complete image and a right-eye complete image;
  • the area adjustment unit is used to reduce the area of the corresponding viewpoint area and increase the area of the peripheral area of the corresponding viewpoint when the displacement of the left-eye viewpoint or the right-eye viewpoint within the preset time period is less than the preset displacement.
  • this application also provides VR glasses, which include:
  • a memory, used to store a computer program;
  • a processor, used to implement the steps of the VR image processing method described above when executing the computer program.
  • the present application also provides a readable storage medium with a computer program stored on the readable storage medium;
  • when the computer program is called and executed by a processor, it implements the steps of the VR image processing method described above.
  • a VR image processing method provided by the present application includes: rendering a left-eye viewpoint area according to the left-eye view angle to obtain a left-eye viewpoint image; rendering a right-eye viewpoint area according to the right-eye view angle to obtain a right-eye viewpoint image; determining a to-be-selected area according to the positions of the left-eye view angle and the right-eye view angle, and selecting any point in the to-be-selected area as the peripheral image view angle; rendering the peripheral area of the left-eye viewpoint and the peripheral area of the right-eye viewpoint according to the peripheral image view angle to obtain the same viewpoint peripheral image; and stitching the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image.
  • this application first determines a unique peripheral image view angle from the left-eye and right-eye view angles, and renders only a single set of viewpoint peripheral images from it. That is to say, the complete images of the left and right eyes are stitched from different viewpoint images and the same viewpoint peripheral image. Since the image in the peripheral area is relatively far from the viewpoint, the resulting difference hardly affects the creation of the stereoscopic effect or the user's VR viewing experience, so the amount of data that needs to be rendered can be significantly reduced with essentially no loss of viewing quality, thereby shortening latency, raising the frame rate, and reducing motion sickness.
  • by monitoring the displacement of the viewpoint it can be judged whether the user is in a static state, and for the static state the area of the viewpoint area can be further reduced, so that the amount of data that needs to be rendered is reduced further still.
  • This application also provides a VR image processing device, VR glasses, and a readable storage medium, which have the above-mentioned beneficial effects, and will not be repeated here.
  • FIG. 1 is a flowchart of a VR image processing method provided by an embodiment of the application;
  • FIG. 2 is a schematic diagram of the positions of a viewpoint area and a viewpoint peripheral area provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of the positions of another viewpoint area and viewpoint peripheral area provided by an embodiment of the application;
  • FIG. 4 is a schematic diagram of the position of a rectangular to-be-selected area in the VR image processing method provided by an embodiment of the application;
  • FIG. 5 is a schematic diagram of the position of a circular to-be-selected area in the VR image processing method provided by an embodiment of the application;
  • FIG. 6 is a schematic diagram of the position of a preferred peripheral image view angle within the to-be-selected area shown in FIG. 5, provided by an embodiment of the present application;
  • FIG. 7 is a flowchart of setting a transition area and rendering the transition area to obtain a transition image, provided by an embodiment of the application;
  • FIG. 8 is a schematic diagram of the positions of a viewpoint area, a transition area, and a viewpoint peripheral area provided by an embodiment of the application;
  • FIG. 9 is a schematic diagram of the position of a transition image view angle provided by an embodiment of the application;
  • FIG. 10 is a flowchart of a method for determining a transition image view angle provided by an embodiment of the application;
  • FIG. 11 is a schematic diagram of the positions of the transition image view angles shown in FIG. 10, provided by an embodiment of the application;
  • FIG. 12 is a schematic diagram of a specific set of view angles and imaging areas provided by an embodiment of the application;
  • FIG. 13 is a structural block diagram of a VR image processing device provided by an embodiment of the application.
  • the purpose of this application is to provide a VR image processing method and apparatus, VR glasses, and a readable storage medium that reduce the amount of image data to be rendered as much as possible while leaving the stereoscopic effect of the VR image essentially unaffected, thereby shortening latency, raising the frame rate, and reducing motion sickness.
  • FIG. 1 is a flowchart of a VR image processing method provided by an embodiment of the application, which includes the following steps:
  • S101: Render a left-eye viewpoint area according to the left-eye view angle to obtain a left-eye viewpoint image;
  • S102: Render a right-eye viewpoint area according to the right-eye view angle to obtain a right-eye viewpoint image;
  • the left-eye view angle and the right-eye view angle refer to the directions of the lines of sight from the left and right eyes respectively, and essentially designate the positions of the left and right eyes; since each eye sees things within a certain angular range, the term "view angle" is used.
  • the left-eye viewpoint (right-eye viewpoint) is the gaze point of the field of view within the left-eye (right-eye) view angle, and is usually the very center. Therefore, the area around the viewpoint within the complete field of view is usually designated the viewpoint area.
  • S101 and S102 therefore render, separately for the left and right eyes, the viewpoint image of the corresponding viewpoint area according to the view angle.
  • since the viewpoint area contains the things that the user most wants to see clearly, it is also the part that matters most to the user's VR viewing experience.
  • the viewpoint area is therefore usually rendered at a high-resolution standard in order to provide users with the best viewing experience. That is, this application still renders the left-eye viewpoint area and the right-eye viewpoint area separately, obtaining a left-eye viewpoint image and a right-eye viewpoint image respectively: still two sets.
  • S103: Determine a to-be-selected area according to the positions of the left-eye and right-eye view angles, and select any point in the to-be-selected area as the peripheral image view angle;
  • the area of the complete field of view outside the viewpoint area is usually called the peripheral area of the viewpoint (Figures 2 and 3 provide two schematic layouts of the viewpoint area and the viewpoint peripheral area).
  • compared with the viewpoint area image, which mainly creates the three-dimensional sense and carries the viewing experience, the peripheral image corresponding to the viewpoint peripheral area has almost no impact on either; in most cases it merely fills the remainder of the full field of view and exists as background.
  • accordingly, the present application no longer renders a left-eye viewpoint peripheral image and a right-eye viewpoint peripheral image according to the left-eye and right-eye view angles respectively, as in the prior art.
  • instead, a unique peripheral image view angle is selected in a suitable way, and the peripheral area of the left-eye viewpoint and the peripheral area of the right-eye viewpoint are both rendered from it to obtain the same viewpoint peripheral image, thereby achieving the goal of reducing the amount of data that needs to be rendered.
  • although the left-eye and right-eye view angles are no longer used directly, the peripheral parts of the two eyes' images must not differ so much that the viewing experience suffers; this step therefore determines a suitable to-be-selected area from the positions of the left-eye and right-eye view angles, and any point in the to-be-selected area is selected as the peripheral image view angle. That is to say, the peripheral image view angle is not randomly selected: it is a position, integrating the positions of the left-eye and right-eye view angles, that neither unduly harms the viewing experience nor fails to reduce the amount of data that needs to be rendered.
  • the area to be selected is a rectangular area between the left-eye view and the right-eye view.
  • the upper and lower boundaries of the rectangle are both parallel to the line connecting the left-eye view and the right-eye view.
  • the upper boundary and the lower boundary are both at a certain distance from the line.
  • the left boundary and the right boundary of the rectangle connect the left and right ends of the upper boundary and the lower boundary respectively.
  • the intersection of the diagonals of the rectangle is the midpoint of the line.
  • the rectangular to-be-selected area shown in Figure 4 includes all points on the line and on all parallels within a certain distance above and below it, and any of these points lies in the middle area between the left-eye view angle and the right-eye view angle. Based on this property, when such a point is used as the peripheral image view angle, the corresponding image can include part of the left-eye image and part of the right-eye image at the same time, so the rendered viewpoint peripheral image will not seriously affect the viewing experience.
  • a circular candidate area with the same properties is obtained by making a circle with the center of the line as the center of the circle.
  • the circular candidate area can be obtained through the following steps:
  • drawing the line connecting the left-eye view angle and the right-eye view angle, then taking the midpoint of the line as the center and drawing a circle with a preset radius to obtain a circular to-be-selected area; where the preset radius does not exceed half the length of the line.
  • when selecting the peripheral image view angle, the center of the circle can preferably be used (as shown in Figure 6), because compared with the other points of the circular to-be-selected area, the preferred peripheral image view angle is at the same height as the left-eye and right-eye view angles, so no difference in image content arises from a height offset; and since the preferred peripheral image view angle is located at the midpoint between the left-eye and right-eye view angles, the image content will contain the same proportions of left-eye and right-eye imagery, the difference between the two eyes' images will be balanced best, and the impact on the viewing experience is minimal.
  • S104: Render the peripheral area of the left-eye viewpoint and the peripheral area of the right-eye viewpoint according to the peripheral image view angle to obtain the same viewpoint peripheral image;
  • building on S103, this step renders the peripheral area of the left-eye viewpoint and the peripheral area of the right-eye viewpoint from the peripheral image view angle to obtain the same viewpoint peripheral image.
  • unlike the prior art, the present application determines a unified peripheral image view angle, so the peripheral areas of the left-eye and right-eye viewpoints are rendered into the same viewpoint peripheral image.
  • since there is no need to render two different sets of peripheral images from the left-eye and right-eye view angles separately, this application effectively reduces the amount of data that needs to be rendered through the above solution; and given the position of the viewpoint peripheral image within the overall field of view, it has little impact on the creation of a three-dimensional sense or on the viewing experience, so the solution of the present application will hardly affect the user's VR viewing experience.
  • S105: Stitch the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image.
  • this embodiment first determines the unique peripheral image view angle from the left-eye and right-eye view angles, and renders only a single set of viewpoint peripheral images from it. That is to say, the complete images of the left and right eyes are stitched from different viewpoint images and the same viewpoint peripheral image. Since the image in the peripheral area of the viewpoint is relatively far from the viewpoint, the size of the difference hardly affects the creation of the stereoscopic effect, and thus hardly affects the user's VR viewing experience; therefore the amount of data that needs to be rendered can be significantly reduced with essentially no loss of VR viewing experience, thereby shortening latency, raising the frame rate, and reducing motion sickness.
  • the present application selects a unified peripheral image angle of view, only one set of peripheral images of the viewpoint needs to be rendered, which reduces the amount of data that needs to be rendered. Therefore, even if the peripheral image of the viewpoint is rendered with the same high resolution as the viewpoint image, the amount of data can still be effectively reduced compared to the solution in the prior art that requires two sets of rendering. Of course, this application can further reduce the amount of data by reducing the resolution of the peripheral image of the viewpoint on the basis of the above solution.
  • One possible implementation, to which this application is not limited, is shown in the flowchart of Fig. 7 and includes the following steps:
  • S201: Determine the transition image view angle according to the left-eye view angle and the right-eye view angle;
  • S202: Render the left-eye transition area and the right-eye transition area according to the transition image view angle to obtain the left-eye transition image and the right-eye transition image.
  • the left-eye transition area surrounds the outer boundary of the left-eye viewpoint area
  • the left-eye viewpoint peripheral area surrounds the outer boundary of the left-eye transition area
  • the right-eye transition area surrounds the outer boundary of the right-eye viewpoint area
  • the peripheral area of the right eye viewpoint surrounds the outer boundary of the right eye transition area.
  • the schematic diagram shown in FIG. 2 can be updated to the schematic diagram shown in FIG. 8.
  • the role of the transition area is to join the viewpoint images and the viewpoint peripheral image, which have different view angles; therefore the resolutions of the left-eye and right-eye transition images should both be lower than that of the corresponding viewpoint image, but higher than that of the corresponding viewpoint peripheral image.
  • in the first category, a single unified transition image view angle is selected; this part of the implementation can follow the description of how the unified peripheral image view angle is determined from the left-eye and right-eye view angles.
  • however, to join viewpoint images and a viewpoint peripheral image of different view angles, it is not enough for the resolution of the transition image to exceed that of the viewpoint peripheral image; because the view angles still differ, image processing methods such as the weight-change method or the feature-point fitting method are also required to optimize the seams between the different areas so that they join more smoothly.
  • a preferred arrangement is shown in the schematic diagram of Fig. 9: the unified transition image view angle is selected at the center of the line connecting the left-eye and right-eye view angles, i.e. the left-eye transition image view angle and the right-eye transition image view angle are at the same position.
  • the use of weight change method and feature point fitting method is to solve the problem of image sudden change in the transition area.
  • at the junctions between the viewpoint area, the transition area, and the viewpoint peripheral area, a certain amount of overlap fitting is required to minimize the display anomalies caused by abrupt image changes; that is to say, the rendering ranges of the viewpoint area, the transition area, and the viewpoint peripheral area must overlap to a certain extent, so that the fitting operation can be performed on the overlapping area, whose result then serves as the final image of that area.
  • the weight-change method calculates the image parameters of each pixel in the image of the transition area according to the formula A=W×x+G×(1-x);
  • A is any pixel in the image of the transition area;
  • W is the peripheral image of the viewpoint;
  • G is the image of the transition area;
  • x is the weight, which fades gradually from 1 to 0 as the distance of A from the inner edge of the transition area increases.
  • the idea of the weight-change method is: moving from the inner edge of the transition area to its outer edge, the weight of the image rendered with the transition-area parameters fades from 1 to 0 (as a floating-point value), while the weight of the image rendered with the viewpoint-peripheral-area parameters goes from 0 to 1 (as a floating-point value); the two images are composited using floating-point arithmetic and fitted together as the actual display image of the transition area.
  • the transformed image is adjusted according to the matching feature points, so that the adjusted transformed image and the standard image have more matching feature points.
  • the general idea of the feature-point fitting method is: one image is selected as the standard image (here the viewpoint image is selected as the standard image), and the other image is used as the transformed image.
  • the standard image and the transformed image are first preprocessed (e.g. histogram equalization, binarization, filtering) and then scanned for features (e.g. edge detection, grayscale detection).
  • feature points can be determined from corners, edges, and contours.
  • the feature points in the two images are then match-tested; for the feature points that can be matched, the feature point locations are determined and kept unchanged, and image synthesis is performed through operations such as image rotation and interpolation so that more feature points in the two images coincide, i.e. more feature points match after the adjustment.
  • the second category does not use the selection method of the peripheral image view angle; instead, a left-eye transition image view angle and a right-eye transition image view angle are selected separately, each closer to its own eye.
  • because the transition image view angles are selected as close as possible to the left-eye and right-eye viewpoints, there is no need to perform the overlap fitting of the first category.
  • equivalently, to eliminate the extra computation caused by overlap fitting, the second category keeps the transition image view angles as close as possible to the left-eye and right-eye view angles, so as to suppress as far as possible the abrupt image changes caused by the view-angle differences between areas.
  • S301: Determine the midpoint of the line connecting the left-eye view angle and the right-eye view angle as the central view angle;
  • S302: Determine the midpoint between the left-eye view angle and the central view angle as the left-eye transition image view angle;
  • S303: Determine the midpoint between the right-eye view angle and the central view angle as the right-eye transition image view angle.
  • the transition image view angle selection of Figure 10 neither directly selects the left-eye view angle as the left-eye transition image view angle and the right-eye view angle as the right-eye transition image view angle, nor directly selects the midpoint between the left-eye and right-eye view angles as a unified transition image view angle; it compromises further, determining the midpoint between the left-eye view angle and the central view angle as the left-eye transition image view angle, and the midpoint between the right-eye view angle and the central view angle as the right-eye transition image view angle.
  • this keeps the left-eye transition image view angle reasonably close to the left-eye view angle and the right-eye transition image view angle reasonably close to the right-eye view angle, weakens to a first degree the abrupt image changes caused by mismatched view angles, and also reduces the extra computation caused by image overlap fitting.
  • a schematic diagram of this approach can be seen in Figure 12.
  • after the transition area is added, the displacement is used to determine whether the user's head is in motion; when a static state is determined, the area to be reduced can be changed from the viewpoint area to the transition area, reducing the amount of data that needs to be rendered while further preserving the viewing experience delivered by the image of the viewpoint area; alternatively, the area of the transition area can also be reduced on top of reducing the area of the viewpoint area, further reducing the amount of data that needs to be rendered.
  • FIG. 13 is a structural block diagram of a VR image processing device provided by an embodiment of the application.
  • the device may include:
  • the left-eye viewpoint image obtaining unit 100 is configured to render a left-eye viewpoint region according to the left-eye viewpoint to obtain a left-eye viewpoint image;
  • the right-eye viewpoint image acquiring unit 200 is configured to render the right-eye viewpoint region according to the right-eye viewpoint to obtain the right-eye viewpoint image;
  • the peripheral image angle of view selection unit 300 is configured to determine the area to be selected according to the positions of the left eye angle of view and the right eye angle of view, and select any point in the area to be selected as the peripheral image angle of view;
  • the viewpoint peripheral image acquisition unit 400 is configured to render the peripheral area of the left eye viewpoint and the peripheral area of the right eye viewpoint to obtain the same peripheral image of the viewpoint according to the peripheral image perspective;
  • the image stitching unit 500 is used for stitching the peripheral images of the viewpoint with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain the left-eye complete image and the right-eye complete image;
  • the area adjustment unit 600 is configured to reduce the area of the corresponding viewpoint area and increase the area of the peripheral area of the corresponding viewpoint when the displacement of the left-eye viewpoint or the right-eye viewpoint within the preset time period is less than the preset displacement.
  • the peripheral image viewing angle selection unit 300 may include:
  • connection sub-unit is used to make the connection between the left-eye view and the right-eye view
  • the image to-be-selected area acquisition sub-unit is used to take the midpoint of the line as the center of the circle and make a circle with a preset radius to obtain a circular to-be-selected area; wherein the preset radius does not exceed half the length of the line.
  • the peripheral image viewing angle selection unit 300 may include:
  • the peripheral image viewing angle optimization subunit is used to determine the center of the circular candidate area as the peripheral image viewing angle when the to-be-selected area is a circular to-be-selected area.
  • the VR image processing device may further include:
  • the transition image viewing angle determining unit is used to determine the transition image viewing angle according to the left eye angle of view and the right eye angle of view;
  • the transition image rendering unit is used to render the left-eye transition area and the right-eye transition area according to the transition image view angle to obtain the left-eye transition image and the right-eye transition image; wherein the left-eye transition area surrounds the outer boundary of the left-eye viewpoint area, the peripheral area of the left-eye viewpoint surrounds the outer boundary of the left-eye transition area, the right-eye transition area surrounds the outer boundary of the right-eye viewpoint area, and the peripheral area of the right-eye viewpoint surrounds the outer boundary of the right-eye transition area; the resolutions of the left-eye transition image and the right-eye transition image are both lower than that of the corresponding viewpoint image and both higher than that of the corresponding viewpoint peripheral image.
  • the VR image processing device may further include:
  • the area readjustment unit is used to reduce the area of the corresponding transition area and increase the area of the peripheral area of the corresponding view point when the displacement of the left eye point of view or the right eye point of view within the preset time period is less than the preset displacement.
  • the transitional image viewpoint determining unit may include:
  • the central viewing angle determining subunit is used to determine the midpoint of the line connecting the left eye viewing angle and the right eye viewing angle as the central viewing angle;
  • the first left-eye transitional image viewing angle determining subunit is used to determine the midpoint of the left-eye viewing angle and the central viewing angle as the first left-eye transitional image viewing angle;
  • the first right-eye transitional image viewing angle determining subunit is used to determine the midpoint of the right-eye viewing angle and the central viewing angle as the first right-eye transitional image viewing angle.
  • alternatively, the transition image view angle determining unit may include:
  • a second left-eye and second right-eye transition image view angle determining subunit, used to take the midpoint of the line connecting the left-eye view angle and the right-eye view angle simultaneously as the second left-eye transition image view angle and the second right-eye transition image view angle.
  • the VR image processing device further includes:
  • the image optimization processing unit is used to perform image optimization processing on the splicing of images in different regions by using the weight change method or the feature point fitting method.
  • the image optimization processing unit includes a weight change method processing subunit
  • the weight change method processing subunit includes:
  • the formula calculation module is used to calculate the image parameters of each pixel in the image of the transition area according to the following formula:
  • A is any pixel in the image of the transition area
  • W is the peripheral image of the viewpoint
  • G is the image of the transition area
  • x is the weight. According to the increase of the edge distance in the transition area according to A, the weight gradually changes from 1 to 0 .
  • the image optimization processing unit includes a feature point fitting method processing subunit
  • the feature point fitting method processing subunit includes:
  • the stitching coincidence area determination module is used to determine the stitching and coincidence area of the viewpoint image and the transition image
  • the standard image selection module is used to take the viewpoint image in the stitching overlap area as the standard image
  • the transformed image selection module is used to take the transition image in the stitching overlap area as the transformed image
  • the matching feature point extraction module is used to extract matching feature points from standard images and transformed images
  • the adjustment module based on matching feature points is used to adjust the transformed image according to the matching feature points, so that the adjusted transformed image and the standard image have more matching feature points.
  • This embodiment exists as a device embodiment corresponding to the above method embodiment, and has all the beneficial effects of the method embodiment, and will not be repeated here.
  • based on the above embodiments, the present application also provides VR glasses;
  • the VR glasses may include a memory and a processor, wherein the memory stores a computer program; when the processor calls the computer program in the memory, the steps of the VR image processing method provided by the above embodiments are implemented.
  • the VR glasses can also include various necessary network interfaces, power supplies, and other components.
  • the present application also provides a readable storage medium on which a computer program is stored.
  • the storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A VR image processing method: a unique peripheral image view angle is determined from the left-eye view angle and the right-eye view angle, and only a single set of viewpoint peripheral images is rendered from that view angle. Since the image in the peripheral area of the viewpoint is relatively far from the viewpoint, the size of the difference hardly affects the creation of the stereoscopic effect or the user's VR viewing experience, so the amount of data that needs to be rendered can be significantly reduced with essentially no loss of viewing quality, thereby shortening latency, raising the frame rate, and reducing motion sickness. In addition, by monitoring the magnitude of viewpoint displacement it can be judged whether the user is in a static state, and for the static state the area of the viewpoint region is further reduced, so that the amount of data that needs to be rendered is reduced further still.

Description

VR image processing method and apparatus, VR glasses, and readable storage medium
This application claims priority to Chinese patent application No. 201911191691.7, entitled "VR image processing method and apparatus, VR glasses, and readable storage medium" and filed with the China Patent Office on November 28, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of VR imaging technology, and in particular to a VR image processing method and apparatus, VR glasses, and a readable storage medium.
Background
To create a visual sense of depth, existing VR systems must render a separate set of images for each of the left eye and the right eye. Rendering two sets of images simultaneously requires a large amount of computation; with the very limited computing power of existing VR devices, this means longer latency and lower frame rates, which in turn cause motion sickness for the wearer.
Therefore, how to reduce the amount of image data to be rendered as much as possible without substantially affecting the stereoscopic effect, and thereby shorten latency, raise the frame rate, and reduce motion sickness, is a problem that those skilled in the art urgently need to solve.
Summary
The purpose of this application is to provide a VR image processing method and apparatus, VR glasses, and a readable storage medium that reduce the amount of image data to be rendered as much as possible while leaving the stereoscopic effect of the VR image essentially unaffected, thereby shortening latency, raising the frame rate, and reducing motion sickness.
To achieve the above purpose, this application provides a VR image processing method, including:
rendering a left-eye viewpoint region according to a left-eye view angle to obtain a left-eye viewpoint image;
rendering a right-eye viewpoint region according to a right-eye view angle to obtain a right-eye viewpoint image;
determining a candidate region according to the positions of the left-eye view angle and the right-eye view angle, and selecting any point in the candidate region as a peripheral image view angle;
rendering a left-eye viewpoint peripheral region and a right-eye viewpoint peripheral region according to the peripheral image view angle to obtain the same viewpoint peripheral image;
stitching the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image;
when the displacement of the left-eye viewpoint or the right-eye viewpoint within a preset duration is smaller than a preset displacement, reducing the area of the corresponding viewpoint region and increasing the area of the corresponding viewpoint peripheral region.
Optionally, determining the candidate region according to the positions of the left-eye view angle and the right-eye view angle includes:
drawing a line connecting the left-eye view angle and the right-eye view angle;
taking the midpoint of the line as the center and drawing a circle with a preset radius to obtain a circular candidate region, where the preset radius does not exceed half the length of the line.
Optionally, when the candidate region is the circular candidate region, selecting any point in the candidate region as the peripheral image view angle includes:
determining the center of the circular candidate region as the peripheral image view angle.
Optionally, the resolution of the viewpoint peripheral image is lower than the resolutions of the left-eye viewpoint image and the right-eye viewpoint image.
Optionally, the VR image processing method further includes:
determining a transition image view angle according to the left-eye view angle and the right-eye view angle;
rendering a left-eye transition region and a right-eye transition region according to the transition image view angle to obtain a left-eye transition image and a right-eye transition image; wherein the left-eye transition region surrounds the outer boundary of the left-eye viewpoint region, the left-eye viewpoint peripheral region surrounds the outer boundary of the left-eye transition region, the right-eye transition region surrounds the outer boundary of the right-eye viewpoint region, and the right-eye viewpoint peripheral region surrounds the outer boundary of the right-eye transition region; and the resolutions of the left-eye transition image and the right-eye transition image are both lower than that of the corresponding viewpoint image and both higher than that of the corresponding viewpoint peripheral image.
Optionally, when the displacement of the left-eye viewpoint or the right-eye viewpoint within the preset duration is smaller than the preset displacement, the method further includes:
reducing the area of the corresponding transition region and increasing the area of the corresponding viewpoint peripheral region.
Optionally, determining the transition image view angle according to the left-eye view angle and the right-eye view angle includes:
determining the midpoint of the line connecting the left-eye view angle and the right-eye view angle as a central view angle;
determining the midpoint between the left-eye view angle and the central view angle as a first left-eye transition image view angle;
determining the midpoint between the right-eye view angle and the central view angle as a first right-eye transition image view angle.
Optionally, determining the transition image view angle according to the left-eye view angle and the right-eye view angle includes:
taking the midpoint of the line connecting the left-eye view angle and the right-eye view angle simultaneously as a second left-eye transition image view angle and a second right-eye transition image view angle.
Optionally, the VR image processing method further includes:
performing image optimization on the seams between the images of different regions using a weight-change method or a feature-point fitting method.
Optionally, using the weight-change method to optimize the seams between the images of different regions includes:
calculating the image parameters of each pixel in the transition-region image according to the following formula:
A=W×x+G×(1-x);
where A is any pixel in the transition-region image, W is the viewpoint peripheral image, G is the transition-region image, and x is the weight, which fades gradually from 1 to 0 as the distance of A from the inner edge of the transition region increases.
Optionally, using the feature-point fitting method to optimize the seams between the images of different regions includes:
determining the stitching overlap region of the viewpoint image and the transition image;
taking the viewpoint image within the stitching overlap region as a standard image;
taking the transition image within the stitching overlap region as a transformed image;
extracting matching feature points from the standard image and the transformed image;
adjusting the transformed image according to the matching feature points, so that the adjusted transformed image and the standard image have more matching feature points.
To achieve the above purpose, this application also provides a VR image processing apparatus, including:
a left-eye viewpoint image acquisition unit, configured to render a left-eye viewpoint region according to a left-eye view angle to obtain a left-eye viewpoint image;
a right-eye viewpoint image acquisition unit, configured to render a right-eye viewpoint region according to a right-eye view angle to obtain a right-eye viewpoint image;
a peripheral image view angle selection unit, configured to determine a candidate region according to the positions of the left-eye view angle and the right-eye view angle, and select any point in the candidate region as a peripheral image view angle;
a viewpoint peripheral image acquisition unit, configured to render a left-eye viewpoint peripheral region and a right-eye viewpoint peripheral region according to the peripheral image view angle to obtain the same viewpoint peripheral image;
an image stitching unit, configured to stitch the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image;
an area adjustment unit, configured to reduce the area of the corresponding viewpoint region and increase the area of the corresponding viewpoint peripheral region when the displacement of the left-eye viewpoint or the right-eye viewpoint within a preset duration is smaller than a preset displacement.
To achieve the above purpose, this application provides VR glasses, including:
a memory, configured to store a computer program;
a processor, configured to implement the steps of the VR image processing method described above when executing the computer program.
To achieve the above purpose, this application also provides a readable storage medium storing a computer program that, when called and executed by a processor, implements the steps of the VR image processing method described above.
The VR image processing method provided by this application includes: rendering a left-eye viewpoint region according to a left-eye view angle to obtain a left-eye viewpoint image; rendering a right-eye viewpoint region according to a right-eye view angle to obtain a right-eye viewpoint image; determining a candidate region according to the positions of the left-eye view angle and the right-eye view angle, and selecting any point in the candidate region as a peripheral image view angle; rendering a left-eye viewpoint peripheral region and a right-eye viewpoint peripheral region according to the peripheral image view angle to obtain the same viewpoint peripheral image; and stitching the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image.
Clearly, in contrast to the prior art, for the viewpoint peripheral region this application first determines a unique peripheral image view angle from the left-eye and right-eye view angles, and renders only a single set of viewpoint peripheral images from it; that is to say, the complete images of the left and right eyes are stitched from different viewpoint images and the same viewpoint peripheral image. Since the image in the viewpoint peripheral region is relatively far from the viewpoint, the size of the difference hardly affects the creation of the stereoscopic effect or the user's VR viewing experience, so the amount of data that needs to be rendered can be significantly reduced with essentially no loss of VR viewing experience, thereby shortening latency, raising the frame rate, and reducing motion sickness. Meanwhile, by monitoring the magnitude of viewpoint displacement it can be judged whether the user is in a static state, and for the static state the area of the viewpoint region can be further reduced, so that the amount of data that needs to be rendered is reduced further still.
This application also provides a VR image processing apparatus, VR glasses, and a readable storage medium, which have the above beneficial effects and are not described again here.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a VR image processing method provided by an embodiment of this application;
Fig. 2 is a schematic diagram of the positions of a viewpoint region and a viewpoint peripheral region provided by an embodiment of this application;
Fig. 3 is a schematic diagram of the positions of another viewpoint region and viewpoint peripheral region provided by an embodiment of this application;
Fig. 4 is a schematic diagram of the position of a rectangular candidate region in the VR image processing method provided by an embodiment of this application;
Fig. 5 is a schematic diagram of the position of a circular candidate region in the VR image processing method provided by an embodiment of this application;
Fig. 6 is a schematic diagram of the position of a preferred peripheral image view angle within the candidate region shown in Fig. 5, provided by an embodiment of this application;
Fig. 7 is a flowchart of setting a transition region and rendering the transition region to obtain a transition image, provided by an embodiment of this application;
Fig. 8 is a schematic diagram of the positions of a viewpoint region, a transition region, and a viewpoint peripheral region provided by an embodiment of this application;
Fig. 9 is a schematic diagram of the position of a transition image view angle provided by an embodiment of this application;
Fig. 10 is a flowchart of a method for determining a transition image view angle provided by an embodiment of this application;
Fig. 11 is a schematic diagram of the positions of the transition image view angles of Fig. 10, provided by an embodiment of this application;
Fig. 12 is a schematic diagram of a specific set of view angles and imaging regions provided by an embodiment of this application;
Fig. 13 is a structural block diagram of a VR image processing apparatus provided by an embodiment of this application.
Detailed Description
The purpose of this application is to provide a VR image processing method and apparatus, VR glasses, and a readable storage medium that reduce the amount of image data to be rendered as much as possible while leaving the stereoscopic effect of the VR image essentially unaffected, thereby shortening latency, raising the frame rate, and reducing motion sickness.
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
Please refer to Fig. 1, a flowchart of a VR image processing method provided by an embodiment of this application, which includes the following steps:
S101: render a left-eye viewpoint region according to a left-eye view angle to obtain a left-eye viewpoint image;
S102: render a right-eye viewpoint region according to a right-eye view angle to obtain a right-eye viewpoint image;
Here, the left-eye view angle and the right-eye view angle refer to the directions of the lines of sight from the left and right eyes respectively, and essentially designate the positions of the left and right eyes; since each eye sees things within a certain angular range, the term "view angle" is used. The left-eye viewpoint (right-eye viewpoint) is the gaze point of the field of view within the left-eye (right-eye) view angle, and is usually the very center. The neighborhood of the viewpoint within the complete field of view is therefore usually designated the viewpoint region.
Thus S101 and S102 render, separately for the left and right eyes, the viewpoint image of the corresponding viewpoint region according to the view angle. For either eye, the viewpoint region contains what the user most wants to see clearly and is also the part that matters most to the user's VR viewing experience, so the viewpoint region is normally rendered at a high-resolution standard in order to give the user the best possible viewing experience. In other words, this application still renders the left-eye viewpoint region and the right-eye viewpoint region separately, obtaining a left-eye viewpoint image and a right-eye viewpoint image respectively: still two sets.
S103: determine a candidate region according to the positions of the left-eye view angle and the right-eye view angle, and select any point in the candidate region as a peripheral image view angle;
S101 and S102 complete the rendering of the viewpoint-region images, while the part of the complete field of view other than the viewpoint region is usually called the viewpoint peripheral region (Figs. 2 and 3 show, for two shapes of the complete field of view, schematic layouts of the viewpoint region and the viewpoint peripheral region). Compared with the viewpoint-region image, which mainly creates the stereoscopic effect and improves the viewing experience, the peripheral image corresponding to the viewpoint peripheral region has almost no impact on either; in the vast majority of cases it merely fills the remainder of the complete field of view and exists as background.
Based on this observation, this application no longer renders, as in the prior art, a left-eye viewpoint peripheral image and a right-eye viewpoint peripheral image for the left-eye and right-eye viewpoint peripheral regions according to the left-eye and right-eye view angles respectively. Instead, a unique peripheral image view angle is selected in a suitable way, and the left-eye and right-eye viewpoint peripheral regions are both rendered from this unique view angle to obtain the same viewpoint peripheral image, thereby achieving the goal of reducing the amount of data that needs to be rendered.
Although the left-eye and right-eye view angles are no longer used directly as in the prior art, the peripheral parts of the two eyes' images must not differ so much that the viewing experience is seriously affected. This step therefore determines a suitable candidate region from the positions of the left-eye and right-eye view angles, and selects any point in that candidate region as the peripheral image view angle. That is to say, the peripheral image view angle is not chosen arbitrarily: it is a position, derived from the positions of both the left-eye and right-eye view angles, that neither unduly harms the viewing experience nor fails to achieve the goal of reducing the amount of data that needs to be rendered.
Specifically, as to how the candidate region can achieve the purpose of this application, two reasonable forms of candidate region are provided in Figs. 4 and 5:
As shown in Fig. 4, the candidate region is a rectangular region between the left-eye view angle and the right-eye view angle: the upper and lower boundaries of the rectangle are both parallel to the line connecting the left-eye and right-eye view angles, each at a certain distance from the line; the left and right boundaries of the rectangle connect the leftmost and rightmost ends of the upper and lower boundaries respectively; and the intersection of the rectangle's diagonals is the midpoint of the line. Simply put, the rectangular candidate region of Fig. 4 includes all points on the line and on all parallels within a certain distance above and below it, and it can be determined that any of these points lies in the middle zone between the left-eye and right-eye view angles. Based on this property, when such a point is used as the peripheral image view angle, the corresponding image can include part of the left-eye image and part of the right-eye image at the same time, so the rendered viewpoint peripheral image will not seriously affect the viewing experience.
Likewise, Fig. 5 obtains a circular candidate region with the same property by drawing a circle around the midpoint of the line. Specifically, the circular candidate region can be obtained through the following steps:
draw the line connecting the left-eye view angle and the right-eye view angle;
take the midpoint of the line as the center and draw a circle with a preset radius to obtain the circular candidate region, where the preset radius does not exceed half the length of the line.
After the circular candidate region of Fig. 5 has been obtained through the above steps, when selecting the peripheral image view angle the center of the circle can preferably be used as the preferred peripheral image view angle (as shown in Fig. 6): compared with the other points of the circular candidate region, this preferred view angle is at the same height as the left-eye and right-eye view angles, so no difference in image content arises from a height offset; and since it is located at the center of the line connecting the left-eye and right-eye view angles, the image content will contain the same proportions of left-eye and right-eye imagery, the difference between the two eyes' images will be balanced best, and the impact on the viewing experience is minimal.
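To make the geometry above concrete, here is a minimal Python sketch (not part of the patent; the function names, the 2D point representation of view angles, and the default radius ratio are assumptions for illustration) that checks membership in the circular candidate region and returns its preferred center point as the peripheral image view angle:
```python
import math

def preferred_peripheral_view_angle(left_eye, right_eye):
    """Midpoint of the left-eye/right-eye line: the preferred choice of Fig. 6."""
    return ((left_eye[0] + right_eye[0]) / 2.0,
            (left_eye[1] + right_eye[1]) / 2.0)

def in_circular_candidate_region(point, left_eye, right_eye, radius_ratio=0.5):
    """Check that a point lies in the circular candidate region of Fig. 5.

    The circle is centered on the midpoint of the line connecting the two
    view angles; its preset radius is radius_ratio times the line length,
    capped at half the line length per the method above.
    """
    assert 0.0 < radius_ratio <= 0.5, "preset radius must not exceed half the line"
    center = preferred_peripheral_view_angle(left_eye, right_eye)
    radius = radius_ratio * math.dist(left_eye, right_eye)
    return math.dist(point, center) <= radius

# Example: eyes 64 mm apart; the midpoint (0, 0) is the preferred view angle.
print(preferred_peripheral_view_angle((-32.0, 0.0), (32.0, 0.0)))            # (0.0, 0.0)
print(in_circular_candidate_region((10.0, 5.0), (-32.0, 0.0), (32.0, 0.0)))  # True
```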
S104: render the left-eye viewpoint peripheral region and the right-eye viewpoint peripheral region according to the peripheral image view angle to obtain the same viewpoint peripheral image;
Building on S103, this step renders the left-eye and right-eye viewpoint peripheral regions from the peripheral image view angle to obtain the same viewpoint peripheral image. Simply put, compared with the prior art, this application fixes one unified peripheral image view angle, so the left-eye and right-eye viewpoint peripheral regions are rendered into the same viewpoint peripheral image. Since there is no need to render two different sets of viewpoint peripheral images from the left-eye and right-eye view angles separately, the above solution effectively reduces the amount of data that needs to be rendered; and given the position of the viewpoint peripheral image within the overall field of view, it has little influence on the creation of the stereoscopic effect or on the viewing experience, so the solution of this application will hardly affect the user's VR viewing experience.
S105: stitch the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image.
Building on S104, this step stitches the same viewpoint peripheral image together with the left-eye viewpoint image and with the right-eye viewpoint image, thereby obtaining the complete left-eye image and the complete right-eye image.
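As a rough illustration of S105 (a sketch under assumptions, not the patent's prescribed implementation: rectangular viewpoint regions and NumPy image arrays on a shared pixel grid are assumed), each complete image can be composed by pasting the eye's own viewpoint render over the shared peripheral render:
```python
import numpy as np

def stitch_complete_image(shared_peripheral, viewpoint, top, left):
    """Paste a high-resolution viewpoint render over the shared peripheral render."""
    complete = shared_peripheral.copy()   # same background image for both eyes
    h, w = viewpoint.shape[:2]
    complete[top:top + h, left:left + w] = viewpoint
    return complete

# The peripheral image is rendered once (S104) and reused for both eyes (S105);
# only the small viewpoint regions are rendered twice.
shared = np.zeros((1080, 1200, 3), dtype=np.uint8)
left_view = np.full((400, 400, 3), 255, dtype=np.uint8)    # stand-in renders
right_view = np.full((400, 400, 3), 255, dtype=np.uint8)
left_complete = stitch_complete_image(shared, left_view, 340, 400)
right_complete = stitch_complete_image(shared, right_view, 340, 400)
```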
S106: when the displacement of the left-eye viewpoint or the right-eye viewpoint within a preset duration is smaller than a preset displacement, reduce the area of the corresponding viewpoint region and increase the area of the corresponding viewpoint peripheral region.
When a user watches VR content through VR glasses, the head inevitably moves, so the magnitude of viewpoint displacement over a certain period can be used to judge whether the user is currently in motion. Once the displacement indicates a static state, the user is gazing intently at the viewpoint content; in this state the amount of data that needs to be rendered can be further reduced by appropriately reducing the area of the viewpoint region and increasing the area of the viewpoint peripheral region.
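A possible sketch of the S106 check follows (the thresholds, data layout, and shrink factor are hypothetical; the patent does not fix concrete values):
```python
import math

PRESET_DURATION = 0.5      # seconds; assumed preset duration
PRESET_DISPLACEMENT = 2.0  # assumed preset displacement, in viewpoint-position units

def adjust_areas(samples, viewpoint_area, peripheral_area, shrink_factor=0.9):
    """samples: list of (timestamp, (x, y)) viewpoint positions, newest last."""
    t_now, p_now = samples[-1]
    # Keep only the samples inside the preset time window.
    window = [(t, p) for (t, p) in samples if t_now - t <= PRESET_DURATION]
    displacement = math.dist(window[0][1], p_now)
    if displacement < PRESET_DISPLACEMENT:
        # Static state: shrink the viewpoint region, grow the peripheral region.
        freed = viewpoint_area * (1.0 - shrink_factor)
        viewpoint_area *= shrink_factor
        peripheral_area += freed
    return viewpoint_area, peripheral_area
```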
It can be seen from the above technical solution that, for the viewpoint peripheral region, this embodiment first determines a unique peripheral image view angle from the left-eye and right-eye view angles, and renders only a single set of viewpoint peripheral images from it; that is to say, the complete images of the left and right eyes are stitched from different viewpoint images and the same viewpoint peripheral image. Since the image in the viewpoint peripheral region is relatively far from the viewpoint, the size of the difference hardly affects the creation of the stereoscopic effect, and thus hardly affects the user's VR viewing experience; therefore the amount of data that needs to be rendered can be significantly reduced with essentially no loss of VR viewing experience, thereby shortening latency, raising the frame rate, and reducing motion sickness.
Further, because this application selects a unified peripheral image view angle, only one set of viewpoint peripheral images needs to be rendered, which reduces the amount of data that needs to be rendered. Therefore, even if the viewpoint peripheral image is rendered at the same high resolution as the viewpoint image, the data volume is still effectively reduced compared with the prior-art solution that renders two sets. Of course, on top of the above solution, this application can further reduce the data volume by lowering the resolution of the viewpoint peripheral image.
However, whether the viewpoint peripheral image is of high, medium, or low resolution, the way its view angle is selected still leaves a slight sense of separation between the viewpoint part and the peripheral part of the complete image. This is caused by the mismatch of view angles; after all, the field of view and its content differ between different view angles. This sense of separation becomes more noticeable as the resolution of the viewpoint peripheral image drops from high to low, so in order to eliminate as far as possible the more obvious separation at medium and low resolutions, this application further provides a solution that adds a transition region between the viewpoint region and the viewpoint peripheral region.
One possible implementation, to which this application is not limited, can be seen in the flowchart of Fig. 7 and includes the following steps:
S201: determine a transition image view angle according to the left-eye view angle and the right-eye view angle;
S202: render a left-eye transition region and a right-eye transition region according to the transition image view angle to obtain a left-eye transition image and a right-eye transition image.
Here, the left-eye transition region surrounds the outer boundary of the left-eye viewpoint region and the left-eye viewpoint peripheral region surrounds the outer boundary of the left-eye transition region; the right-eye transition region surrounds the outer boundary of the right-eye viewpoint region and the right-eye viewpoint peripheral region surrounds the outer boundary of the right-eye transition region.
With the transition region added, the schematic diagram of Fig. 2 is updated to that of Fig. 8. It should be noted that the role of the transition region is to join the viewpoint image and the viewpoint peripheral image, which have different view angles; the resolutions of the left-eye and right-eye transition images should therefore both be lower than that of the corresponding viewpoint image but higher than that of the corresponding viewpoint peripheral image.
How the transition image view angle is determined from the left-eye view angle and the right-eye view angle is discussed in detail below.
In the first category, as with the peripheral image view angle, a single unified transition image view angle is selected; this part of the implementation can follow the description of how the unified peripheral image view angle is determined from the left-eye and right-eye view angles. However, to join viewpoint images and a viewpoint peripheral image of different view angles, it is not enough for the resolution of the transition image to exceed that of the viewpoint peripheral image; because the view angles still differ, image processing methods such as the weight-change method or the feature-point fitting method are also needed to optimize the seams between the different regions so that they join more smoothly. A preferred arrangement can be seen in the schematic diagram of Fig. 9: the unified transition image view angle is selected at the center of the line connecting the left-eye and right-eye view angles, i.e. the left-eye transition image view angle and the right-eye transition image view angle are at the same position.
That is, methods such as the weight-change method and the feature-point fitting method are used to solve the problem of abrupt image changes in the transition region. At the junctions between the viewpoint region, the transition region, and the viewpoint peripheral region, a certain amount of overlap fitting is required to minimize the display anomalies caused by abrupt image changes; in other words, the rendering ranges of the viewpoint region, the transition region, and the viewpoint peripheral region must overlap to a certain extent, so that the fitting operation can be performed on the overlap zone, whose result then serves as the final image of that zone.
Among these, using the weight-change method to achieve this purpose can proceed through the following steps:
calculate the image parameters of each pixel in the transition-region image according to the following formula:
A=W×x+G×(1-x);
where A is any pixel in the transition-region image, W is the viewpoint peripheral image, G is the transition-region image, and x is the weight, which fades gradually from 1 to 0 as the distance of A from the inner edge of the transition region increases.
The idea of the weight-change method is: moving from the inner edge of the transition region to its outer edge, the weight of the image rendered with the transition-region parameters fades from 1 to 0 (as a floating-point value), while the weight of the image rendered with the viewpoint-peripheral-region parameters goes from 0 to 1 (as a floating-point value); the two images are composited using floating-point arithmetic and fitted together as the actual display image of the transition region.
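A minimal NumPy sketch of the weight-change fit is given below. It is an illustration under assumptions (a circular transition ring and two renders aligned on one pixel grid), and it follows the prose above: the transition render's weight falls from 1 at the inner edge to 0 at the outer edge, with the peripheral render weighted by the complement.
```python
import numpy as np

def blend_transition_ring(transition_img, peripheral_img, cx, cy, inner_r, outer_r):
    """Composite the transition render and the peripheral render over the ring.

    transition_img and peripheral_img are HxWx3 float arrays rendered over the
    same overlapping pixel grid; (cx, cy) is the viewpoint center in pixels.
    """
    h, w = transition_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - cx, ys - cy)
    # Weight x: 1 at the inner edge of the ring, fading to 0 at the outer edge.
    x = np.clip((outer_r - r) / (outer_r - inner_r), 0.0, 1.0)[..., None]
    return transition_img * x + peripheral_img * (1.0 - x)
```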
Using the feature-point fitting method to achieve this purpose can proceed through the following steps:
determine the stitching overlap region of the viewpoint image and the transition image;
take the viewpoint image within the stitching overlap region as the standard image;
take the transition image within the stitching overlap region as the transformed image;
extract matching feature points from the standard image and the transformed image;
adjust the transformed image according to the matching feature points, so that the adjusted transformed image and the standard image have more matching feature points.
The general idea of the feature-point fitting method is: one image is selected as the standard image (here the viewpoint image is selected as the standard image) and the other as the transformed image. The standard image and the transformed image are first preprocessed (e.g. histogram equalization, binarization, filtering) and then scanned for features (e.g. edge detection, grayscale detection); feature points can be determined from corners, edges, and contours. Once the feature points are determined, the feature points of the two images are match-tested; for the feature points that can be matched, the feature point locations are determined and kept unchanged, and image synthesis is performed through operations such as image rotation and interpolation so that more feature points in the two images coincide, i.e. more feature points match after the adjustment.
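The patent describes the feature-point fitting only generically (corners, edges, contours; rotation and interpolation). One concrete stand-in is sketched below with OpenCV's ORB features and a RANSAC-fitted rotation/translation/scale transform; this choice of detector and transform model is an assumption for illustration, not the patent's prescription.
```python
import cv2
import numpy as np

def fit_transformed_to_standard(standard_img, transformed_img):
    """Warp the transition render so more of its feature points match the viewpoint render."""
    gray_s = cv2.cvtColor(standard_img, cv2.COLOR_BGR2GRAY)
    gray_t = cv2.cvtColor(transformed_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    kp_s, des_s = orb.detectAndCompute(gray_s, None)   # viewpoint image = standard
    kp_t, des_t = orb.detectAndCompute(gray_t, None)   # transition image = transformed
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_t, des_s)
    src = np.float32([kp_t[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches])
    # Rotation/translation/scale fit; matched feature points stay (nearly) fixed.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = standard_img.shape[:2]
    return cv2.warpAffine(transformed_img, M, (w, h), flags=cv2.INTER_LINEAR)
```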
In the second category, the selection scheme of the peripheral image view angle is not used; instead, a left-eye transition image view angle and a right-eye transition image view angle are selected separately, each closer to its own eye. Because the transition image view angles are selected as close as possible to the left-eye and right-eye viewpoints, the overlap fitting of the first category is no longer needed. Equivalently, to eliminate the extra computation brought by overlap fitting, the second category adopts a scheme that keeps the transition image view angles as close as possible to the left-eye and right-eye view angles, so as to suppress as far as possible the abrupt image changes caused by the view-angle differences between regions.
One variant is simply to take the left-eye view angle as the left-eye transition image view angle and the right-eye view angle as the right-eye transition image view angle; the transition region then shares the viewpoint region's view angle and effectively becomes a pure extension of the viewpoint region. This scheme attacks the problem by enlarging the area of the viewpoint region, but it increases the amount of rendered data relative to the initial solution and thus runs somewhat counter to the main purpose of this application. This application therefore provides another implementation; please refer to the flowchart of Fig. 10 and the corresponding schematic diagram of Fig. 11:
S301: determine the midpoint of the line connecting the left-eye view angle and the right-eye view angle as the central view angle;
S302: determine the midpoint between the left-eye view angle and the central view angle as the left-eye transition image view angle;
S303: determine the midpoint between the right-eye view angle and the central view angle as the right-eye transition image view angle.
That is, the selection of Fig. 10 neither directly selects the left-eye view angle as the left-eye transition image view angle and the right-eye view angle as the right-eye transition image view angle, nor directly selects the midpoint between the left-eye and right-eye view angles as a unified transition image view angle; it compromises further, determining the midpoint between the left-eye view angle and the central view angle as the left-eye transition image view angle, and the midpoint between the right-eye view angle and the central view angle as the right-eye transition image view angle. This keeps the left-eye transition image view angle reasonably close to the left-eye view angle and the right-eye transition image view angle reasonably close to the right-eye view angle, weakens to a first degree the abrupt image changes caused by mismatched view angles, and also reduces the extra computation brought by image overlap fitting. A schematic diagram adopting this scheme can be seen in Fig. 12.
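A toy sketch of S301 to S303 (hypothetical names; view angles treated as 2D positions):
```python
def transition_view_angles(left_eye, right_eye):
    midpoint = lambda a, b: tuple((ai + bi) / 2.0 for ai, bi in zip(a, b))
    central = midpoint(left_eye, right_eye)   # S301: central view angle
    left_t = midpoint(left_eye, central)      # S302: left-eye transition view angle
    right_t = midpoint(right_eye, central)    # S303: right-eye transition view angle
    return left_t, right_t

# Eyes at (-32, 0) and (32, 0) (e.g. millimeters) give transition view angles
# at (-16, 0) and (16, 0): each halfway between its eye and the central view angle.
print(transition_view_angles((-32.0, 0.0), (32.0, 0.0)))
```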
Correspondingly, after the transition region is added, displacement is used to judge whether the user's head is in motion; when a static state is determined, the region whose area is reduced can be changed from the viewpoint region to the transition region, which reduces the amount of data that needs to be rendered while further safeguarding the viewing experience delivered by the viewpoint-region image. Alternatively, on the basis of reducing the area of the viewpoint region, the area of the transition region can also be reduced, further decreasing the amount of data that needs to be rendered.
Because the situations are too varied to enumerate one by one, those skilled in the art should recognize that many examples can exist by combining the basic method and principles provided by this application with actual circumstances; without sufficient creative effort, they should all fall within the protection scope of this application.
Please refer now to Fig. 13, a structural block diagram of a VR image processing apparatus provided by an embodiment of this application; the apparatus may include:
a left-eye viewpoint image acquisition unit 100, configured to render a left-eye viewpoint region according to a left-eye view angle to obtain a left-eye viewpoint image;
a right-eye viewpoint image acquisition unit 200, configured to render a right-eye viewpoint region according to a right-eye view angle to obtain a right-eye viewpoint image;
a peripheral image view angle selection unit 300, configured to determine a candidate region according to the positions of the left-eye view angle and the right-eye view angle, and select any point in the candidate region as a peripheral image view angle;
a viewpoint peripheral image acquisition unit 400, configured to render a left-eye viewpoint peripheral region and a right-eye viewpoint peripheral region according to the peripheral image view angle to obtain the same viewpoint peripheral image;
an image stitching unit 500, configured to stitch the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image;
an area adjustment unit 600, configured to reduce the area of the corresponding viewpoint region and increase the area of the corresponding viewpoint peripheral region when the displacement of the left-eye viewpoint or the right-eye viewpoint within a preset duration is smaller than a preset displacement.
The peripheral image view angle selection unit 300 may include:
a line-drawing subunit, configured to draw the line connecting the left-eye view angle and the right-eye view angle;
a candidate region acquisition subunit, configured to take the midpoint of the line as the center and draw a circle with a preset radius to obtain a circular candidate region, where the preset radius does not exceed half the length of the line.
The peripheral image view angle selection unit 300 may further include:
a peripheral image view angle preference subunit, configured to determine the center of the circular candidate region as the peripheral image view angle when the candidate region is the circular candidate region.
Further, the VR image processing apparatus may also include:
a transition image view angle determining unit, configured to determine a transition image view angle according to the left-eye view angle and the right-eye view angle;
a transition image rendering unit, configured to render a left-eye transition region and a right-eye transition region according to the transition image view angle to obtain a left-eye transition image and a right-eye transition image; wherein the left-eye transition region surrounds the outer boundary of the left-eye viewpoint region, the left-eye viewpoint peripheral region surrounds the outer boundary of the left-eye transition region, the right-eye transition region surrounds the outer boundary of the right-eye viewpoint region, and the right-eye viewpoint peripheral region surrounds the outer boundary of the right-eye transition region; the resolutions of the left-eye transition image and the right-eye transition image are both lower than that of the corresponding viewpoint image and both higher than that of the corresponding viewpoint peripheral image.
Further, the VR image processing apparatus may also include:
an area readjustment unit, configured to reduce the area of the corresponding transition region and increase the area of the corresponding viewpoint peripheral region when the displacement of the left-eye viewpoint or the right-eye viewpoint within the preset duration is smaller than the preset displacement.
The transition image view angle determining unit may include:
a central view angle determining subunit, configured to determine the midpoint of the line connecting the left-eye view angle and the right-eye view angle as the central view angle;
a first left-eye transition image view angle determining subunit, configured to determine the midpoint between the left-eye view angle and the central view angle as the first left-eye transition image view angle;
a first right-eye transition image view angle determining subunit, configured to determine the midpoint between the right-eye view angle and the central view angle as the first right-eye transition image view angle.
Alternatively, the transition image view angle determining unit may include:
a second left-eye and second right-eye transition image view angle determining subunit, configured to take the midpoint of the line connecting the left-eye view angle and the right-eye view angle simultaneously as the second left-eye transition image view angle and the second right-eye transition image view angle.
Further, the VR image processing apparatus also includes:
an image optimization processing unit, configured to perform image optimization on the seams between the images of different regions using the weight-change method or the feature-point fitting method.
The image optimization processing unit may include a weight-change method processing subunit, which includes:
a formula calculation module, configured to calculate the image parameters of each pixel in the transition-region image according to the following formula:
A=W×x+G×(1-x);
where A is any pixel in the transition-region image, W is the viewpoint peripheral image, G is the transition-region image, and x is the weight, which fades gradually from 1 to 0 as the distance of A from the inner edge of the transition region increases.
The image optimization processing unit may include a feature-point fitting method processing subunit, which includes:
a stitching overlap region determining module, configured to determine the stitching overlap region of the viewpoint image and the transition image;
a standard image selection module, configured to take the viewpoint image within the stitching overlap region as the standard image;
a transformed image selection module, configured to take the transition image within the stitching overlap region as the transformed image;
a matching feature point extraction module, configured to extract matching feature points from the standard image and the transformed image;
a matching-feature-point-based adjustment module, configured to adjust the transformed image according to the matching feature points, so that the adjusted transformed image and the standard image have more matching feature points.
This embodiment exists as the apparatus counterpart of the method embodiment above and has all of its beneficial effects, which are not repeated here one by one.
Based on the above embodiments, this application also provides VR glasses, which may include a memory and a processor; the memory stores a computer program, and when the processor calls the computer program in the memory, the steps of the VR image processing method provided by the above embodiments can be implemented. Of course, the VR glasses may also include the necessary network interfaces, power supply, and other components.
This application also provides a readable storage medium storing a computer program; when the computer program is executed by an executing terminal or processor, the steps of the VR image processing method provided by the above embodiments can be implemented. The storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or other media that can store program code.
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may refer to one another. As the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; refer to the method description for the relevant details.
Professionals may further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above in general terms of function. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. Professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
Specific examples are used herein to explain the principles and implementations of this application; the descriptions of the above embodiments are only intended to help understand the method of this application and its core idea. For those of ordinary skill in the art, improvements and modifications can be made to this application without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of this application.
It should also be noted that in this specification, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between them. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.

Claims (14)

  1. A VR image processing method, comprising:
    rendering a left-eye viewpoint region according to a left-eye view angle to obtain a left-eye viewpoint image;
    rendering a right-eye viewpoint region according to a right-eye view angle to obtain a right-eye viewpoint image;
    determining a candidate region according to the positions of the left-eye view angle and the right-eye view angle, and selecting any point in the candidate region as a peripheral image view angle;
    rendering a left-eye viewpoint peripheral region and a right-eye viewpoint peripheral region according to the peripheral image view angle to obtain the same viewpoint peripheral image;
    stitching the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image;
    when the displacement of the left-eye viewpoint or the right-eye viewpoint within a preset duration is smaller than a preset displacement, reducing the area of the corresponding viewpoint region and increasing the area of the corresponding viewpoint peripheral region.
  2. The VR image processing method according to claim 1, wherein determining the candidate region according to the positions of the left-eye view angle and the right-eye view angle comprises:
    drawing a line connecting the left-eye view angle and the right-eye view angle;
    taking the midpoint of the line as the center and drawing a circle with a preset radius to obtain a circular candidate region, wherein the preset radius does not exceed half the length of the line.
  3. The VR image processing method according to claim 2, wherein, when the candidate region is the circular candidate region, selecting any point in the candidate region as the peripheral image view angle comprises:
    determining the center of the circular candidate region as the peripheral image view angle.
  4. The VR image processing method according to claim 1, wherein the resolution of the viewpoint peripheral image is lower than the resolutions of the left-eye viewpoint image and the right-eye viewpoint image.
  5. The VR image processing method according to any one of claims 1 to 4, further comprising:
    determining a transition image view angle according to the left-eye view angle and the right-eye view angle;
    rendering a left-eye transition region and a right-eye transition region according to the transition image view angle to obtain a left-eye transition image and a right-eye transition image; wherein the left-eye transition region surrounds the outer boundary of the left-eye viewpoint region, the left-eye viewpoint peripheral region surrounds the outer boundary of the left-eye transition region, the right-eye transition region surrounds the outer boundary of the right-eye viewpoint region, and the right-eye viewpoint peripheral region surrounds the outer boundary of the right-eye transition region; and the resolutions of the left-eye transition image and the right-eye transition image are both lower than that of the corresponding viewpoint image and both higher than that of the corresponding viewpoint peripheral image.
  6. The VR image processing method according to claim 5, wherein, when the displacement of the left-eye viewpoint or the right-eye viewpoint within the preset duration is smaller than the preset displacement, the method further comprises:
    reducing the area of the corresponding transition region and increasing the area of the corresponding viewpoint peripheral region.
  7. The VR image processing method according to claim 5, wherein determining the transition image view angle according to the left-eye view angle and the right-eye view angle comprises:
    determining the midpoint of the line connecting the left-eye view angle and the right-eye view angle as a central view angle;
    determining the midpoint between the left-eye view angle and the central view angle as a first left-eye transition image view angle;
    determining the midpoint between the right-eye view angle and the central view angle as a first right-eye transition image view angle.
  8. The VR image processing method according to claim 5, wherein determining the transition image view angle according to the left-eye view angle and the right-eye view angle comprises:
    taking the midpoint of the line connecting the left-eye view angle and the right-eye view angle simultaneously as a second left-eye transition image view angle and a second right-eye transition image view angle.
  9. The VR image processing method according to claim 8, further comprising:
    performing image optimization on the seams between the images of different regions using a weight-change method or a feature-point fitting method.
  10. The VR image processing method according to claim 9, wherein using the weight-change method to optimize the seams between the images of different regions comprises:
    calculating the image parameters of each pixel in the transition-region image according to the following formula:
    A=W×x+G×(1-x);
    wherein A is any pixel in the transition-region image, W is the viewpoint peripheral image, G is the transition-region image, and x is the weight, which fades gradually from 1 to 0 as the distance of A from the inner edge of the transition region increases.
  11. The VR image processing method according to claim 9, wherein using the feature-point fitting method to optimize the seams between the images of different regions comprises:
    determining the stitching overlap region of the viewpoint image and the transition image;
    taking the viewpoint image within the stitching overlap region as a standard image;
    taking the transition image within the stitching overlap region as a transformed image;
    extracting matching feature points from the standard image and the transformed image;
    adjusting the transformed image according to the matching feature points, so that the adjusted transformed image and the standard image have more matching feature points.
  12. A VR image processing apparatus, comprising:
    a left-eye viewpoint image acquisition unit, configured to render a left-eye viewpoint region according to a left-eye view angle to obtain a left-eye viewpoint image;
    a right-eye viewpoint image acquisition unit, configured to render a right-eye viewpoint region according to a right-eye view angle to obtain a right-eye viewpoint image;
    a peripheral image view angle selection unit, configured to determine a candidate region according to the positions of the left-eye view angle and the right-eye view angle, and select any point in the candidate region as a peripheral image view angle;
    a viewpoint peripheral image acquisition unit, configured to render a left-eye viewpoint peripheral region and a right-eye viewpoint peripheral region according to the peripheral image view angle to obtain the same viewpoint peripheral image;
    an image stitching unit, configured to stitch the viewpoint peripheral image with the left-eye viewpoint image and the right-eye viewpoint image respectively to obtain a complete left-eye image and a complete right-eye image;
    an area adjustment unit, configured to reduce the area of the corresponding viewpoint region and increase the area of the corresponding viewpoint peripheral region when the displacement of the left-eye viewpoint or the right-eye viewpoint within a preset duration is smaller than a preset displacement.
  13. VR glasses, comprising:
    a memory, configured to store a computer program;
    a processor, configured to implement the steps of the VR image processing method according to any one of claims 1 to 11 when executing the computer program.
  14. A readable storage medium, wherein the readable storage medium stores a computer program that, when called and executed by a processor, implements the steps of the VR image processing method according to any one of claims 1 to 11.
PCT/CN2019/130402 2019-11-28 2019-12-31 VR image processing method and apparatus, VR glasses, and readable storage medium WO2021103270A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/606,163 US11785202B2 (en) 2019-11-28 2019-12-31 VR image processing method and apparatus, VR glasses and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911191691.7A CN111314687B (zh) 2019-11-28 2019-11-28 一种vr影像处理方法、装置、vr眼镜及可读存储介质
CN201911191691.7 2019-11-28

Publications (1)

Publication Number Publication Date
WO2021103270A1 (zh)

Family

ID=71146705

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130402 WO2021103270A1 (zh) 2019-11-28 2019-12-31 VR image processing method and apparatus, VR glasses, and readable storage medium

Country Status (3)

Country Link
US (1) US11785202B2 (zh)
CN (1) CN111314687B (zh)
WO (1) WO2021103270A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660480A (zh) * 2021-08-16 2021-11-16 纵深视觉科技(南京)有限责任公司 Method and apparatus for implementing a surround-view function, electronic device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857336B (zh) * 2020-07-10 2022-03-25 歌尔科技有限公司 Head-mounted device, rendering method therefor, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105723705A (zh) * 2013-11-20 2016-06-29 皇家飞利浦有限公司 Generation of images for an autostereoscopic multi-view display
CN107431796A (zh) * 2015-05-27 2017-12-01 谷歌公司 Omnistereo capture and rendering of panoramic virtual reality content
CN108174178A (zh) * 2018-01-09 2018-06-15 重庆爱奇艺智能科技有限公司 Image display method and apparatus, and virtual reality device
US20180278916A1 (en) * 2015-08-07 2018-09-27 Samsung Electronics Co., Ltd. Electronic device for generating 360-degree three-dimensional image and method therefor
CN109739460A (zh) * 2019-01-04 2019-05-10 京东方科技集团股份有限公司 VR display method and device, and computer-readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9288476B2 (en) * 2011-02-17 2016-03-15 Legend3D, Inc. System and method for real-time depth modification of stereo images of a virtual reality environment
CN105933690A (zh) * 2016-04-20 2016-09-07 乐视控股(北京)有限公司 Method and apparatus for adaptively adjusting the size of 3D picture content
JP6994868B2 (ja) * 2017-08-09 2022-01-14 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, encoding method, and decoding method
KR102473840B1 (ko) * 2017-11-21 2022-12-05 삼성전자주식회사 Display driver and mobile electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105723705A (zh) * 2013-11-20 2016-06-29 皇家飞利浦有限公司 Generation of images for an autostereoscopic multi-view display
CN107431796A (zh) * 2015-05-27 2017-12-01 谷歌公司 Omnistereo capture and rendering of panoramic virtual reality content
US20180278916A1 (en) * 2015-08-07 2018-09-27 Samsung Electronics Co., Ltd. Electronic device for generating 360-degree three-dimensional image and method therefor
CN108174178A (zh) * 2018-01-09 2018-06-15 重庆爱奇艺智能科技有限公司 Image display method and apparatus, and virtual reality device
CN109739460A (zh) * 2019-01-04 2019-05-10 京东方科技集团股份有限公司 VR display method and device, and computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660480A (zh) * 2021-08-16 2021-11-16 纵深视觉科技(南京)有限责任公司 Method and apparatus for implementing a surround-view function, electronic device, and storage medium
CN113660480B (zh) * 2021-08-16 2023-10-31 纵深视觉科技(南京)有限责任公司 Method and apparatus for implementing a surround-view function, electronic device, and storage medium

Also Published As

Publication number Publication date
CN111314687A (zh) 2020-06-19
US11785202B2 (en) 2023-10-10
CN111314687B (zh) 2021-06-25
US20220247996A1 (en) 2022-08-04

Similar Documents

Publication Publication Date Title
US11350081B2 (en) Head mounted display device and method for providing visual aid using same
JP4578294B2 (ja) Stereoscopic image display device, stereoscopic image display method, and computer program
US20180192022A1 (en) Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices
CN114063302B (zh) Method and apparatus for optical aberration correction
US10067349B2 (en) Method of adapting a virtual reality helmet
EP2395759B1 (en) Autostereoscopic display device and method for operating an autostereoscopic display device
EP2357841B1 (en) Method and apparatus for processing three-dimensional images
US10082867B2 (en) Display control method and display control apparatus
US20140267284A1 (en) Vision corrective display
US8866881B2 (en) Stereoscopic image playback device, stereoscopic image playback system, and stereoscopic image playback method
WO2021103270A1 (zh) VR image processing method and apparatus, VR glasses, and readable storage medium
KR20090035880A (ko) One-source multi-use stereo camera and method for producing stereoscopic image content
WO2021103267A1 (zh) VR image processing method and apparatus, VR glasses, and readable storage medium
US11533443B2 (en) Display eyewear with adjustable camera direction
CN108769664A (zh) Glasses-free 3D display method, apparatus, device, and medium based on human-eye tracking
US20100135580A1 (en) Method for adjusting video frame
JP2001128195A (ja) Stereoscopic image correction device, stereoscopic image display device, and recording medium storing a stereoscopic image correction program
WO2019095095A1 (zh) Image adjustment method for a head-mounted display device, and head-mounted display device
CN111815382A (zh) Virtual glasses try-on method based on face recognition technology
JP7429515B2 (ja) Image processing device, head-mounted display, and image display method
JP2021124520A (ja) Image display device, image display program, and image display method
JP2005195822A (ja) Image display device
WO2023108744A1 (zh) Control method and control apparatus for a VR device, VR device, system, and storage medium
Hwang et al. Augmented Edge Enhancement on Google Glass for Vision‐Impaired Users
CN111554223A (zh) Picture adjustment method for display device, display device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19953949

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19953949

Country of ref document: EP

Kind code of ref document: A1