WO2023036218A1 - Method and apparatus for determining width of viewpoint - Google Patents

Publication number: WO2023036218A1
Authority: WIPO (PCT)
Application number: PCT/CN2022/117710
Other languages: French (fr), Chinese (zh)
Inventors: 贺曙 (He Shu), 徐万良 (Xu Wanliang)
Original assignee: Future Technology (Xiangyang) Co., Ltd. (未来科技(襄阳)有限公司)
Application filed by Future Technology (Xiangyang) Co., Ltd. (未来科技(襄阳)有限公司)
Publication of WO2023036218A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/97: Determining parameters from multiple pictures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images

Definitions

  • the present application belongs to the field of naked-eye 3D display, and in particular relates to a method and an apparatus for determining the width of a viewpoint.
  • naked-eye 3D, short for autostereoscopy, is a general term for technologies that achieve stereoscopic visual effects without external aids such as polarized glasses.
  • the device collects images through a front camera and tracks the position of the human eye, then calculates the viewpoint corresponding to the current eye position; this eye-tracking and positioning process requires determining the width of each viewpoint in the naked-eye 3D system.
  • the width of the viewpoint is mainly derived through optical design; in actual use, however, a grating is often applied to a variety of third-party devices, and the exact optical parameters of the device's screen, such as the glass thickness, the thickness of the optical glue, and the size of the assembly gap, cannot be obtained, which makes it impossible to accurately determine the width of the viewpoint.
  • the purpose of this application is to provide a method and an apparatus for determining the width of the viewpoint, which can quickly determine the viewpoint width corresponding to a device when the optical parameters of the device's screen are unknown; the viewpoint width can then be used to adjust the 3D images or 3D videos displayed by the device, improving the user's viewing experience.
  • the first aspect of the embodiment of the present application provides a method for determining the width of a viewpoint, including:
  • the first device displays a target image in stereoscopic mode, so that the second device shoots the target image in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area and a fourth area, and calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area; the two groups of images are obtained by shooting the target image simultaneously, at different positions, with two cameras arranged on the second device, where the two cameras are on the same horizontal line and the distance between their centers is a preset distance; the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
  • if the first device receives the first coordinate recording instruction, sent by the second device when the first pixel average difference reaches a first preset value, the first device records the first position coordinates of the position where the second device was when capturing the first group of images;
  • if the first device receives the second coordinate recording instruction, sent by the second device when the second pixel average difference reaches a second preset value, the first device records the second position coordinates of the position where the second device was when capturing the second group of images;
  • the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
  • the second aspect of the embodiment of the present application provides a method for determining the width of a viewpoint, including:
  • the second device shoots, in real time, the target image displayed by the first device in stereoscopic mode to obtain a first group of images and a second group of images; the two groups of images are obtained by shooting the target image simultaneously, at different positions, with two cameras arranged on the second device, where the two cameras are on the same horizontal line and the distance between their centers is a preset distance;
  • the second device respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area and a fourth area, wherein the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
  • the second device respectively calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area;
  • the second device sends a first coordinate recording instruction to the first device, so that the first device records the first position coordinates, which are the coordinates of the position of the second device when capturing the first group of images;
  • the second device sends a second coordinate recording instruction to the first device, so that the first device records the second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device; the second position coordinates are the coordinates of the position of the second device when capturing the second group of images.
  • the third aspect of the embodiment of the present application provides an apparatus for determining a viewpoint width, including a first device, and the first device includes:
  • a display unit configured to display the target image in stereoscopic mode, so that the second device shoots the target image in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain the first area, the second area, the third area and the fourth area, and calculates the first pixel average difference between the first area and the second area and the second pixel average difference between the third area and the fourth area; the two groups of images are obtained by the second device shooting the target image simultaneously, at different positions, with two cameras installed on it, where the two cameras are on the same horizontal line and the distance between their centers is a preset distance; the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
  • a recording unit configured to record the first position coordinates of the position where the second device was when capturing the first group of images, if the first device receives the first coordinate recording instruction sent by the second device when the first pixel average difference reaches the first preset value, and to record the second position coordinates of the position where the second device was when capturing the second group of images, if the first device receives the second coordinate recording instruction sent by the second device when the second pixel average difference reaches the second preset value;
  • a determining unit configured to determine the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
  • in the above solution, when determining the viewpoint width of the first device, the second device can shoot the first device at different positions to obtain multiple images, segment the images to obtain two image regions, and calculate the pixel average difference of the two regions; when the pixel average difference reaches a preset value, the first device records the position coordinates of the second device at that moment, and then calculates the width of the viewpoint corresponding to the first device from the position coordinates of the second device at the different positions and the lamination angle of the grating, so that when the optical parameters of the screen of the first device are unknown, the viewpoint width corresponding to the first device can be quickly determined, and the stereoscopic images or stereoscopic videos displayed by the first device can then be adjusted according to that width to improve the user's viewing experience.
  • FIG. 1 is a schematic diagram of an embodiment of a method for determining the width of a viewpoint provided by the embodiment of the present application, including:
  • the first device displays the target image in stereoscopic mode, so that the second device shoots the target image displayed by the first device in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain the first area, the second area, the third area and the fourth area, and calculates the first pixel average difference between the first area and the second area and the second pixel average difference between the third area and the fourth area.
  • the first group of images and the second group of images are obtained by the second device shooting the target image simultaneously, at different positions, through two cameras installed on the second device; the two cameras are on the same horizontal line and the distance between their centers is a preset distance; the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images; the second device has an image acquisition function and a communication function.
  • the target image is a half-black, half-color image, where the color half is a visible color such as white, red, green, or yellow.
  • each image in the first group of images and the second group of images may contain not only the target image but also other content, so what needs to be determined here is the pixel average difference of the screen area, not of each whole image; the screen area is the display area of the target image on the screen of the first device.
  • the first device may also display the position coordinates of the second device in real time on its screen, or directly display the position coordinates of the camera of the second device.
  • if the first device receives the first coordinate recording instruction, sent by the second device when the first pixel average difference reaches a first preset value, the first device records the first position coordinates of the position where the second device was when capturing the first group of images.
  • after the second device segments the first group of images to obtain the first area and the second area and calculates the first pixel average difference, it can judge whether the first pixel average difference reaches the first preset value; if it does, the second device may send the first coordinate recording instruction, and the first device records, according to the instruction, the first position coordinates of the position where the second device was when the first group of images was taken.
  • if the first device receives the second coordinate recording instruction, sent by the second device when the second pixel average difference reaches a second preset value, the first device records the second position coordinates of the position where the second device was when capturing the second group of images.
  • after the second device segments the second group of images to obtain the third area and the fourth area and calculates the second pixel average difference, it can judge whether the second pixel average difference reaches the second preset value; if it does, the second device may send the second coordinate recording instruction, and the first device records, according to the instruction, the second position coordinates of the position where the second device was when the second group of images was taken.
  • the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
  • after the first device records the first position coordinates and the second position coordinates, it can acquire the lamination angle of the grating corresponding to the first device (that is, the lamination angle of the grating of the 3D film pasted on the first device; the way of obtaining it is not limited here and it can, for example, be input by the user), and determine the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle.
  • the first device determining the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device includes:
  • the first device determines the first position of the second device according to the lamination angle and the first position coordinates; the first position is the position of the second device when the first pixel average difference reaches the first preset value;
  • the first device determines the second position of the second device according to the lamination angle and the second position coordinates; the second position is the position of the second device when the second pixel average difference reaches the second preset value;
  • the first device determines the width of the viewpoint based on the first position and the second position.
  • the first device may calculate the first position by the following formula based on the lamination angle and the first position coordinates; the first position is the position where the second device captured the first group of images when the first pixel average difference reached the first preset value:
  • X0′ is the first position, and the coordinates of the first position are (x0, y0);
  • y is a preset constant;
  • a is the lamination angle;
  • the first device may calculate the second position by the following formula based on the lamination angle and the second position coordinates; the second position is the position where the second device captured the second group of images when the second pixel average difference reached the second preset value:
  • X1′ is the second position, and the coordinates of the second position are (x1, y1);
  • after the first device calculates the first position and the second position according to the above formulas, it can calculate the width of the viewpoint corresponding to the first device based on the first position and the second position by the following formula:
  • VW = abs(X0′ - X1′);
  • VW is the width of the viewpoint corresponding to the first device, and abs is the absolute value function.
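The relation above can be sketched in a few lines of code. The exact formula for computing X0′ and X1′ from (x0, y0), (x1, y1), the constant y and the lamination angle a does not survive in this excerpt, so the shear used in `project_position` below is purely a hypothetical stand-in; `viewpoint_width` implements the stated VW = abs(X0′ - X1′).

```python
import math

def project_position(x, y, a_deg, y_const):
    # Hypothetical projection of a recorded camera position onto the
    # horizontal axis, compensating for the grating lamination angle a.
    # The patent's exact formula is omitted from this excerpt; the shear
    # x' = x - (y_const - y) * tan(a) is used purely for illustration.
    return x - (y_const - y) * math.tan(math.radians(a_deg))

def viewpoint_width(x0_proj, x1_proj):
    # VW = abs(X0' - X1'), as given in the text.
    return abs(x0_proj - x1_proj)
```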
  • FIG. 2 is a schematic diagram of the calculation of the width of the viewpoint provided by the embodiment of the present application, wherein 201 is the first position coordinate (x0, y0), and 202 is the second position coordinate (x1, y1).
  • the first device may obtain the width of the corresponding grating, then determine the arrangement and layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoint, and adjust the stereoscopic images displayed while the first device operates in stereoscopic mode.
  • that is, the arrangement of the grating of the 3D film pasted on the screen of the first device can be deduced, and the 3D images displayed or 3D videos played when the first device operates in stereoscopic mode can then be adjusted according to changes in the position of the human eyes, providing users with a better 3D display effect.
  • in the above embodiment, the position of the first device remains unchanged, the second device changes its own position, and the first device tracks the position coordinates of the camera of the second device and records them as the second device changes position; of course, other arrangements are possible, for example, the position of the second device remains unchanged and the first device changes its position while the position coordinates of the camera of the second device are recorded; this is not specifically limited, as long as the position coordinates at which the second device photographs the first device from different relative positions can be recorded. When the position of the second device remains unchanged and the coordinates are recorded by changing the position of the first device, the specific execution process is as follows:
  • the second device takes a picture of the first device, and the screen of the first device displays the position coordinates of the second device (the position coordinates of the camera of the second device may also be displayed; this is not specifically limited); the first device adjusts its position until it receives a coordinate recording instruction from the second device and records the position coordinates of the second device at the current position; the coordinate recording instruction is issued when the second device analyzes the images obtained by shooting the first device, obtains the corresponding pixel average difference, and that difference reaches the first preset value or the second preset value; the first device then continues to adjust its position until it receives a coordinate recording instruction from the second device again, and records the position coordinates of the second device at the new position; in this way, the position coordinates of two positions are obtained.
  • when determining the viewpoint width of the first device, the second device can capture multiple images of the first device at different positions, segment them to obtain two image regions, and calculate the pixel average difference between the two regions; when the pixel average difference reaches a preset value, the first device records the position coordinates of the second device at that moment, and then calculates the width of the viewpoint corresponding to the first device according to the position coordinates of the second device at the different positions and the lamination angle of the grating, so that when the optical parameters of the screen of the first device are unknown, the viewpoint width can be quickly determined, and the stereoscopic images or stereoscopic videos displayed by the first device can be adjusted according to that width to improve the user's viewing experience.
  • Figure 3 is a schematic diagram of the application scenario provided by the embodiment of the present application.
  • the position of the first device is fixed, the position of the second device changes, and the first device records the position coordinates of the second device when the pixel average difference of the images taken by the second device reaches the preset value, as shown in FIG. 3.
  • the first device 301 displays the target image in stereoscopic mode, and the second device shoots, at different positions through the two cameras arranged on it, the target image displayed by the first device in stereoscopic mode to obtain the corresponding images, segments the images to obtain two image areas, calculates the pixel average difference between the two areas, and judges whether the pixel average difference reaches the first preset value or the second preset value; if it does, the second device sends the coordinate recording instruction to the first device 301, and after receiving it the first device 301 records the position coordinates, as shown in FIG. 3.
  • when the second device is at position 302, the first pixel average difference of the images captured of the first device 301 reaches the first preset value; at this time the first device 301 receives the first coordinate recording instruction and may record the first position coordinates of the second device at position 302. The second device then changes its position, shoots the first device 301 again to obtain the corresponding images, segments them to obtain two image areas, calculates the pixel average difference, and judges whether it reaches the second preset value; if the second pixel average difference reaches the second preset value, the second device sends the second coordinate recording instruction to the first device 301.
  • that is, the second device sends the second coordinate recording instruction to the first device 301, and the first device 301 records, according to the instruction, the second position coordinates of the second device at position 303; the first device 301 can then calculate the width of the viewpoint corresponding to the first device 301 from the first position coordinates of the second device at position 302, the second position coordinates of the second device at position 303, and the lamination angle of the grating corresponding to the first device 301, and adjust the 3D images or 3D videos displayed by the first device 301 in stereoscopic mode according to that width.
  • in this way, the width of the viewpoint corresponding to the first device 301 can be quickly determined, and the 3D images or 3D videos displayed by the first device 301 can then be adjusted according to the width of the viewpoint to improve the user's viewing experience.
  • it should be noted that each time the second device shoots the first device at one position through the two cameras installed on it and obtains a group of images of the first device captured at that position, it directly segments the images to obtain the image areas, calculates the corresponding pixel average difference, and judges whether the pixel average difference reaches the first preset value or the second preset value.
  • the method for determining the viewpoint width provided by the embodiment of the present application is described above from the perspective of the first device in conjunction with FIG. 1 , and the method for determining the viewpoint width provided in the embodiment of the present application is described below from the perspective of the second device in conjunction with FIG. 4 .
  • FIG. 4 is a schematic diagram of another embodiment of the method for determining the viewpoint width provided by the embodiment of the present application, including:
  • the second device shoots in real time the target image displayed by the first device in a stereo mode, so as to obtain a first group of images and a second group of images.
  • the first group of images and the second group of images are obtained by the second device shooting the target image simultaneously, at different positions, through two cameras set on the second device; the two cameras are on the same horizontal line and the distance between their centers is a preset distance, such as 65 mm; the second device is equipped with an image acquisition function and a communication function.
  • the target image is a half-black, half-color image, where the color half is a visible color such as white, red, green, or yellow.
  • that is, the first device displays the half-black, half-color target image in 3D mode.
  • the second device takes pictures of the first device at different positions to obtain the first group of images and the second group of images.
  • the second device respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area.
  • after the second device obtains the first group of images and the second group of images, it can segment the two images in the first group of images to obtain the first area and the second area, and segment the two images in the second group of images to obtain the third area and the fourth area; because the images captured by a single camera set on the second device include other content besides the target image, the purpose of the segmentation is to ensure that the first area, the second area, the third area and the fourth area contain only the target image and no other content.
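A minimal sketch of this segmentation step, assuming the screen area has already been located in the frame by some detection step the excerpt does not describe (the bounding box here is a hypothetical input):

```python
import numpy as np

def crop_screen_area(frame, bbox):
    # Crop the screen area (the display region of the target image on the
    # first device's screen) out of a full camera frame, discarding the
    # surrounding content. bbox = (x, y, w, h) is assumed to come from a
    # separate screen-detection step not described in the excerpt.
    x, y, w, h = bbox
    return frame[y:y + h, x:x + w]
```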
  • the second device respectively calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area.
  • after the second device segments the first group of images and the second group of images to obtain the first area, the second area, the third area and the fourth area, it can respectively calculate the first pixel average difference between the first area and the second area and the second pixel average difference between the third area and the fourth area.
  • the second device may calculate the first pixel average difference and the second pixel average difference by the following formula:
  • aver_piexl is the first pixel average difference or the second pixel average difference;
  • A is the screen area;
  • Al is the first area or the third area;
  • Ar is the second area or the fourth area;
  • w is the width of the first area or the width of the third area;
  • h is the height of the second area or the height of the fourth area;
  • the first area and the second area have the same width and the same height, and the third area and the fourth area have the same width and the same height.
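Only the variable list of the formula survives in this excerpt, so the natural reading, averaging |Al(i, j) - Ar(i, j)| over the w by h pixels of the screen area, is sketched below; treat it as an interpretation rather than the patent's literal formula.

```python
import numpy as np

def pixel_average_difference(region_l, region_r):
    # Mean absolute per-pixel difference between two equal-sized regions
    # (Al and Ar in the text). This averaging is an assumed reading of the
    # omitted formula, not a literal transcription of it.
    al = np.asarray(region_l, dtype=np.float64)
    ar = np.asarray(region_r, dtype=np.float64)
    if al.shape != ar.shape:
        raise ValueError("regions must have the same width and height")
    return float(np.abs(al - ar).mean())
```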
  • each image in the first group of images and the second group of images may contain not only the target image but also other content, so what needs to be determined here is the pixel average difference of the screen area, not of the whole images; the screen area is the display area of the target image on the screen of the first device.
  • the first device may also display the position coordinates of the second device in real time on its screen, or directly display the position coordinates of the camera of the second device.
  • the second device sends a first coordinate recording instruction to the first device, so that the first device records the first position coordinates.
  • after the second device analyzes the first group of images to obtain the first pixel average difference, it can determine whether the first pixel average difference reaches the first preset value; if it does, the second device sends the first coordinate recording instruction to the first device, so that the first device records the first position coordinates according to the instruction, where the first position coordinates are the coordinates of the position of the second device when capturing the first group of images.
  • that is, the first device displays the position coordinates of the second device in real time, and when the second device determines that the first pixel average difference reaches the first preset value, it may send the first coordinate recording instruction to the first device; after receiving the instruction, the first device may record the position coordinates of the second device where the first group of images was taken.
  • the second device changes position and shoots the first device again to obtain images captured after the position change, and analyzes them; once the pixel average difference of an image captured after changing position reaches the first preset value, it sends the first coordinate recording instruction to the first device, so that the first device records the coordinates of that position.
  • the second device sends a second coordinate recording instruction to the first device, so that the first device records the second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
  • after the second device analyzes the second group of images to obtain the second pixel average difference, it can determine whether the second pixel average difference reaches the second preset value; if it does, the second device sends the second coordinate recording instruction to the first device, so that the first device records the second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device, where the second position coordinates are the coordinates of the position of the second device when capturing the second group of images.
  • the first device displays the position coordinates of the second device in real time; when the second device determines that the second pixel average difference reaches the second preset value, it may send the second coordinate recording instruction to the first device, and after receiving the instruction the first device can record the coordinates of that position and calculate the viewpoint width according to the two position coordinates and the lamination angle.
  • the second device changes position and shoots the first device again to obtain images captured after the position change, and analyzes them; once the pixel average difference of an image captured after changing position reaches the second preset value, it sends the second coordinate recording instruction to the first device, so that the first device records the coordinates of that position.
  • after the second device captures the first device at a position to obtain an image of the first device at that position, it directly analyzes the image to obtain the corresponding pixel average difference and judges whether the pixel average difference reaches the first preset value or the second preset value; if not, the second device changes position and repeats the above steps. When the pixel average difference of an image captured at a certain position reaches the first preset value, the second device sends the first coordinate recording instruction to the first device, so that the first device records the corresponding position coordinates; it then continues to adjust its position and repeat the above steps until the pixel average difference of an image captured at another position reaches the second preset value, whereupon it sends the second coordinate recording instruction to the first device, so that the first device records the coordinates of that position and calculates the viewpoint width according to the two position coordinates and the lamination angle.
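The capture-analyze-move loop described above can be sketched as follows. This is a minimal illustration rather than the patented implementation; `capture_pair`, `analyze`, `send_instruction`, and `move` are hypothetical callables standing in for the second device's camera, analysis, communication, and motion steps, and the ≥-threshold comparison is one assumed reading of "reaches the preset value".

```python
def find_recording_positions(first_preset, second_preset,
                             capture_pair, analyze, send_instruction, move):
    """Move the second device until each preset pixel difference is reached,
    asking the first device to record its position at each threshold."""
    recorded_first = False
    while True:
        left_img, right_img = capture_pair()   # shoot with both cameras at once
        diff = analyze(left_img, right_img)    # pixel average difference of the pair
        if not recorded_first and diff >= first_preset:
            send_instruction("record_first_coordinates")
            recorded_first = True
        elif recorded_first and diff >= second_preset:
            send_instruction("record_second_coordinates")
            return                             # both positions have been recorded
        move()                                 # change position and try again
```

With both coordinates recorded by the first device, the viewpoint width can then be computed from the two positions and the grating's lamination angle.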
  • the method for determining the viewpoint width provided by the embodiments of the present application has been described above from the perspectives of the second device and the first device; it is described below from the perspective of the interaction between the first device and the second device, in conjunction with FIG. 5.
  • FIG. 5 is a schematic diagram of another embodiment of the method for determining the viewpoint width provided by the embodiment of the present application, including:
  • the first device displays a target image in a stereoscopic mode.
  • the second device shoots the target image in real time to obtain a first group of images and a second group of images.
  • the second device respectively segments two images in the first group of images and two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area.
  • the second device respectively calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area.
  • steps 501 to 504 are similar to steps 401 to 403 in FIG. 4 , which have been described in detail in FIG. 4 above, and will not be repeated here.
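As a sketch of the pixel-average-difference calculation in the steps above: the patent does not prescribe a particular formula, so the mean absolute per-pixel difference below, using NumPy for illustration, is one plausible reading.

```python
import numpy as np

def pixel_average_difference(area_a: np.ndarray, area_b: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two areas of equal shape."""
    if area_a.shape != area_b.shape:
        raise ValueError("areas must have the same width and height")
    # widen before subtracting so uint8 values do not wrap around
    a = area_a.astype(np.int32)
    b = area_b.astype(np.int32)
    return float(np.mean(np.abs(a - b)))
```

The same function serves for both pairs: the first and second areas from the first group of images, and the third and fourth areas from the second group.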
  • the second device sends a first coordinate recording instruction to the first device.
  • the first device records the first location coordinates according to the first coordinate recording instruction.
  • the second device sends a second coordinate recording instruction to the first device.
  • the first device records the second location coordinates according to the second coordinate recording instruction.
  • steps 505 to 508 are similar to the steps of recording location coordinates in FIG. 1 and FIG. 4 , which have been described in detail in FIG. 1 and FIG. 4 , and will not be repeated here.
  • the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
  • step 510 is similar to step 104 in FIG. 1 , which has been described in detail in FIG. 1 above, and details are not repeated here.
  • the width of the viewpoint corresponding to the first device may be calculated by the first device according to the first position coordinates, the second position coordinates and the lamination angle of the grating. For example, after the first device records the first position coordinates and the second position coordinates, it may send them to the second device; the second device then calculates the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device, and sends the width back to the first device. In addition, the analysis of the first group of images and the second group of images to obtain the pixel average differences may also be performed by the first device; this is not specifically limited here.
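The calculation from the two recorded coordinates and the lamination angle is not spelled out in this passage. One plausible sketch, under the assumption that the viewpoint width equals the displacement between the two recorded positions projected onto the axis perpendicular to the grating stripes, is the following; the function and the projection formula are hypothetical, not the patent's stated method.

```python
import math

def viewpoint_width(first_pos, second_pos, lamination_angle_deg):
    """Hypothetical width estimate: project the displacement between the two
    recorded positions onto the direction normal to the grating stripes."""
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    theta = math.radians(lamination_angle_deg)
    # at a 0-degree lamination angle the stripes are taken as vertical,
    # so only the horizontal displacement contributes
    return abs(dx * math.cos(theta) + dy * math.sin(theta))
```

Whichever device performs this computation, it needs only the two position coordinates and the lamination angle, which is why either the first or the second device can carry it out.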
  • the embodiments of the present application are described above from the perspective of the method for determining the viewpoint width, and the embodiments of the present application are described below from the perspective of the apparatus for determining the viewpoint width.
  • the apparatus for determining the viewpoint width includes a first device 600 and a second device 700 .
  • FIG. 6 is a schematic diagram of a virtual structure of a first device provided in an embodiment of the present application.
  • the first device 600 includes:
  • the display unit 601 is configured to display the target image in a stereoscopic mode, so that the second device shoots the target image in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain the first area, the second area, the third area and the fourth area, and calculates the first pixel average difference between the first area and the second area and the second pixel average difference between the third area and the fourth area, wherein the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions through two cameras installed on the second device, the two cameras of the second device are on the same horizontal line, the distance between the centers of the two cameras is a preset distance, the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
  • the recording unit 602 is configured to: if the first device receives the first coordinate recording instruction sent by the second device when the first pixel average difference reaches a first preset value, record the first position coordinates of the second device when capturing the first group of images; and if the first device receives the second coordinate recording instruction sent by the second device when the second pixel average difference reaches a second preset value, record the second position coordinates of the second device when capturing the second group of images;
  • the determining unit 603 is configured to determine the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
  • FIG. 7 is a schematic diagram of the virtual structure of the second device provided in the embodiment of the present application.
  • the second device 700 includes:
  • the photographing unit 701 is configured to photograph the target image displayed by the first device in stereoscopic mode in real time to obtain a first group of images and a second group of images, where the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions through the two cameras installed on the second device, the two cameras of the second device are on the same horizontal line, and the distance between the centers of the two cameras is a preset distance;
  • a segmentation unit 702, configured to segment the two images in the first group of images and the two images in the second group of images respectively to obtain the first region, the second region, the third region and the fourth region, wherein the first region and the second region correspond to the two images in the first group of images, and the third region and the fourth region correspond to the two images in the second group of images;
  • a calculation unit 703 configured to calculate a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area;
  • the transceiver unit 704 is configured to: if the first pixel average difference reaches a first preset value, send a first coordinate recording instruction to the first device, so that the first device records the first position coordinates, where the first position coordinates are the coordinates of the position of the second device when capturing the first group of images; and if the second pixel average difference reaches a second preset value, send a second coordinate recording instruction to the first device, so that the first device records the second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the corresponding grating, where the second position coordinates are the coordinates of the position of the second device when capturing the second group of images.

Abstract

The present application provides a method for determining the width of a viewpoint, comprising: displaying a target image such that a second device photographs the target image in real time to obtain a first group of images and a second group of images; respectively segmenting two images in the first group of images and two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area; calculating a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area; if the first pixel average difference reaches a first preset value, recording first position coordinates of the second device when photographing the first group of images; if the second pixel average difference reaches a second preset value, recording second position coordinates of the second device when photographing the second group of images; and determining the width of a viewpoint corresponding to a first device according to the first position coordinates, the second position coordinates, and a lamination angle of a grating corresponding to the first device.

Description

Method and device for determining viewpoint width

Technical Field

The present application belongs to the field of naked-eye 3D, and in particular relates to a method and device for determining the width of a viewpoint.
Background

Naked-eye 3D, short for autostereoscopy, is a general term for technologies that achieve stereoscopic visual effects without external aids such as polarized glasses.

In a naked-eye 3D system with eye tracking, the device captures images through a front camera and tracks the position of the human eyes, and then calculates the viewpoint corresponding to the current eye position. This process of capturing images and tracking the eye position requires the width of each viewpoint in the naked-eye 3D system to be determined.

At present, the width of a viewpoint is mainly derived through optical design. In practice, however, one grating is often used on many different third-party devices, and the exact optical parameters of a device's screen, such as the glass thickness, the optical adhesive thickness and the assembly gap size, cannot be obtained, so the width of the viewpoint cannot be determined accurately.
Summary

The purpose of the present application is to provide a method and device for determining the viewpoint width, which can quickly determine the width of the viewpoint corresponding to a device when the optical parameters of the device's screen are unknown, and then adjust the 3D image or 3D video displayed by the device according to the viewpoint width, improving the user's viewing experience.
A first aspect of the embodiments of the present application provides a method for determining the width of a viewpoint, including:

the first device displays a target image in a stereoscopic mode, so that a second device shoots the target image in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area and a fourth area, and calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area, wherein the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions through two cameras arranged on the second device, the two cameras of the second device are on the same horizontal line, the distance between the centers of the two cameras is a preset distance, the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;

if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference reaches a first preset value, the first device records first position coordinates of the second device when capturing the first group of images;

if the first device receives a second coordinate recording instruction sent by the second device when the second pixel average difference reaches a second preset value, the first device records second position coordinates of the second device when capturing the second group of images;

the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates and the lamination angle of the grating corresponding to the first device.
A second aspect of the embodiments of the present application provides a method for determining the width of a viewpoint, including:

a second device shoots, in real time, a target image displayed by a first device in a stereoscopic mode to obtain a first group of images and a second group of images, the first group of images and the second group of images being obtained by the second device simultaneously shooting the target image at different positions through two cameras arranged on the second device, where the two cameras of the second device are on the same horizontal line and the distance between their centers is a preset distance;

the second device respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area and a fourth area, wherein the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;

the second device respectively calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area;

if the first pixel average difference reaches a first preset value, the second device sends a first coordinate recording instruction to the first device, so that the first device records first position coordinates, the first position coordinates being the coordinates of the position of the second device when capturing the first group of images;

if the second pixel average difference reaches a second preset value, the second device sends a second coordinate recording instruction to the first device, so that the first device records second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates and the lamination angle of the grating corresponding to the first device, the second position coordinates being the coordinates of the position of the second device when capturing the second group of images.
A third aspect of the embodiments of the present application provides an apparatus for determining viewpoint width, including a first device, where the first device includes:

a display unit, configured to display a target image in a stereoscopic mode, so that a second device shoots the target image in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area and a fourth area, and calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area, wherein the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions through two cameras arranged on the second device, the two cameras of the second device are on the same horizontal line, the distance between the centers of the two cameras is a preset distance, the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;

a recording unit, configured to record first position coordinates of the second device when capturing the first group of images if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference reaches a first preset value, and to record second position coordinates of the second device when capturing the second group of images if the first device receives a second coordinate recording instruction sent by the second device when the second pixel average difference reaches a second preset value;

a determining unit, configured to determine the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates and the lamination angle of the grating corresponding to the first device.
Compared with the related art, in the embodiments provided by the present application, when determining the viewpoint width of the first device, the second device can shoot the first device at different positions to obtain multiple images, segment those images into two image areas, and calculate the pixel average difference of the two areas; when the pixel average difference reaches a preset value, the first device records the position coordinates of the second device at that moment. The first device then calculates the width of the viewpoint corresponding to the first device according to the position coordinates of the second device at the different positions and the lamination angle of the grating. In this way, the width of the viewpoint corresponding to the first device can be quickly determined even when the optical parameters of the screen of the first device are unknown, and the stereoscopic image or stereoscopic video displayed by the first device can then be adjusted according to the viewpoint width, improving the user's viewing experience.
Brief Description of the Drawings
Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application.
Referring to FIG. 1, a schematic diagram of an embodiment of the method for determining the viewpoint width provided by the embodiments of the present application, the method includes:

101. The first device displays a target image in a stereoscopic mode, so that the second device shoots the target image displayed by the first device in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area and a fourth area, and calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area.
In this embodiment, the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions through two cameras arranged on the second device; the two cameras of the second device are on the same horizontal line, and the distance between their centers is a preset distance. The first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images. The second device is any terminal device with image acquisition and communication functions, and the target image is a half-black, half-color image, where "color" refers to a visible color such as white, red, green or yellow. It can be understood that when the second device shoots the target image to obtain the first group of images and the second group of images, each image may contain content other than the target image; therefore, what needs to be determined here is the pixel average difference of the screen area rather than that of each whole image, the screen area being the display area of the target image on the screen of the first device.

In addition, while displaying the target image, the first device may also display the position coordinates of the second device, or directly the position coordinates of the cameras of the second device, in real time on its screen.
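Because the captured frames may contain more than the first device's screen, the difference is computed only over the screen area. A minimal sketch follows, assuming the display area's bounding box `(x, y, w, h)` in the captured frame is already known (how it is located is a hypothetical prerequisite here; the passage does not fix a detection method), and again using mean absolute difference as one plausible measure.

```python
import numpy as np

def screen_area_difference(frame_a, frame_b, screen_box):
    """Mean absolute pixel difference restricted to the screen area."""
    x, y, w, h = screen_box
    # crop both frames to the display area of the target image, then
    # widen to int32 so uint8 subtraction cannot wrap around
    a = frame_a[y:y + h, x:x + w].astype(np.int32)
    b = frame_b[y:y + h, x:x + w].astype(np.int32)
    return float(np.mean(np.abs(a - b)))
```

Restricting the comparison this way keeps background content around the screen from distorting the average.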
102. If the first device receives the first coordinate recording instruction sent by the second device when the first pixel average difference reaches a first preset value, the first device records the first position coordinates of the position where the second device is located when capturing the first group of images.

In this embodiment, after segmenting the first group of images to obtain the first area and the second area and calculating the first pixel average difference between them, the second device can judge whether the first pixel average difference reaches the first preset value; if it does, the second device sends the first coordinate recording instruction, and the first device records, according to the instruction, the first position coordinates of the position where the second device is located when capturing the first group of images.
103. If the first device receives the second coordinate recording instruction sent by the second device when the second pixel average difference reaches a second preset value, the first device records the second position coordinates of the position where the second device is located when capturing the second group of images.

In this embodiment, after segmenting the second group of images to obtain the third area and the fourth area and calculating the second pixel average difference between them, the second device can judge whether the second pixel average difference reaches the second preset value; if it does, the second device sends the second coordinate recording instruction, and the first device records, according to the instruction, the second position coordinates of the position where the second device is located when capturing the second group of images.
104. The first device determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
In this embodiment, after recording the first position coordinates and the second position coordinates, the first device may acquire the lamination angle of the grating corresponding to the first device (that is, the lamination angle of the grating of the 3D film attached to the first device; the manner of acquiring the lamination angle is not limited here, and it may, for example, be input by a user), and determine the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle.
In an embodiment, determining, by the first device, the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device includes:
determining, by the first device, a first position of the second device according to the lamination angle and the first position coordinates, the first position being the position of the second device when the first pixel average difference reaches the first preset value;
determining, by the first device, a second position of the second device according to the lamination angle and the second position coordinates, the second position being the position of the second device when the second pixel average difference reaches the second preset value; and
determining, by the first device, the width of the viewpoint according to the first position and the second position.
In this embodiment, the first device may calculate the first position, that is, the position at which the second device captured the first group of images when the first pixel average difference reached the first preset value, based on the lamination angle and the first position coordinates using the following formula:
X0′ = x0 + (y0 − y) × tan(a)
where X0′ is the first position, (x0, y0) are the first position coordinates, y is a preset constant, and a is the lamination angle.
The first device may calculate the second position, that is, the position at which the second device captured the second group of images when the second pixel average difference reached the second preset value, based on the lamination angle and the second position coordinates using the following formula:
X1′ = x1 + (y1 − y) × tan(a)
where X1′ is the second position and (x1, y1) are the second position coordinates.
After obtaining the first position and the second position from the above formulas, the first device may calculate the width of the viewpoint corresponding to the first device based on the first position and the second position using the following formula:
VW = abs(X0′ − X1′)
where VW is the width of the viewpoint corresponding to the first device and abs is the absolute value function.
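The two projection formulas and the width formula above can be sketched as follows (an illustrative sketch only, not part of the embodiments; the function and parameter names are chosen here for clarity, and the lamination angle is assumed to be given in degrees):

```python
import math

def shifted_x(x, y, y_ref, angle_deg):
    # X' = x + (y - y_ref) * tan(a): project a recorded camera position onto
    # the reference line y = y_ref, compensating for the lamination angle a
    # of the grating.
    return x + (y - y_ref) * math.tan(math.radians(angle_deg))

def viewpoint_width(first_pos, second_pos, y_ref, angle_deg):
    # VW = abs(X0' - X1')
    x0p = shifted_x(first_pos[0], first_pos[1], y_ref, angle_deg)
    x1p = shifted_x(second_pos[0], second_pos[1], y_ref, angle_deg)
    return abs(x0p - x1p)
```

For example, with a lamination angle of 0 the projection leaves the x-coordinates unchanged, and the viewpoint width reduces to the horizontal distance between the two recorded positions.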
The calculation of the viewpoint width is described below with reference to FIG. 2, which is a schematic diagram of viewpoint width calculation provided by an embodiment of this application. In FIG. 2, 201 denotes the first position coordinates (x0, y0), 202 denotes the second position coordinates (x1, y1), and 203 denotes the coordinate of the preset constant y in the coordinate system (it can be understood that the preset constant y may be set to half the width of the screen area, or may be set according to the actual situation, which is not specifically limited). Taking the calculation of the first position X0′ as an example: when calculating X0′, the lamination angle a of the grating is known; after the first position coordinates (x0, y0) are obtained, the preset constant 203 is converted to the same direction as the Y-axis direction of the first position coordinates, and the first position can then be calculated by the formula X0′ = x0 + (y0 − y) × tan(a). The second position can be calculated in the same way, and the absolute value of the difference between the first position and the second position, that is, the width of the viewpoint corresponding to the first device, is then obtained by the formula VW = abs(X0′ − X1′).
It should be noted that, after obtaining the width of the viewpoint corresponding to the first device, the first device may adjust, based on the width of the viewpoint, the 3D image displayed or the 3D video played when the first device operates in stereoscopic mode. Specifically, when making this adjustment, the first device may acquire the width of the grating corresponding to the first device, determine the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoint, and adjust the stereoscopic image displayed by the first device in stereoscopic mode according to the arrangement layout of the viewpoints and changes in the position of the user's eyes. In other words, once the width of the viewpoint is obtained, since the width of the grating of the first device is known, the arrangement layout of the grating of the 3D film attached to the screen of the first device can be deduced, and the 3D image displayed or the 3D video played when the first device operates in stereoscopic mode can then be adjusted according to changes in the position of the user's eyes, providing the user with a better 3D display effect.
It should also be noted that the above description assumes that the position of the first device remains unchanged while the second device changes its own position, with the first device tracking the position coordinates of the camera of the second device and recording the position coordinates of the second device as it moves. Other arrangements are of course possible; for example, the position of the second device may remain unchanged while the first device changes position to record the position coordinates of the camera of the second device. This is not specifically limited, as long as the position coordinates can be recorded when the second device captures images of the first device at different positions. It can be understood that, when the position of the tester remains unchanged and the target device changes position to record the eye coordinates of the tester, the specific execution process is as follows:
When the first device displays the target image in stereoscopic mode, the second device captures the first device, and the screen of the first device displays the position coordinates of the second device (it may, of course, display the position coordinates of the camera of the second device instead, which is not specifically limited). The first device adjusts its position until it receives a coordinate recording instruction from the second device and records the position coordinates of the second device at the current position, the coordinate recording instruction being issued by the second device when it analyzes the image obtained by capturing the first device, obtains the corresponding pixel average difference, and the pixel average difference reaches the first preset value or the second preset value. The first device then continues to adjust its position until it receives another coordinate recording instruction sent by the second device, and again records the position coordinates of the second device at the current position, this instruction likewise being issued when the pixel average difference of the analyzed image reaches the first preset value or the second preset value. Position coordinates at two different positions are thereby obtained, and the width of the viewpoint is calculated from the two sets of position coordinates and the lamination angle.
In summary, it can be seen that in the embodiments provided in this application, when determining the viewpoint width of the first device, the second device may capture the first device at different positions to obtain multiple images, segment the images to obtain two image areas, and calculate the pixel average difference between the two image areas. When the pixel average difference reaches a preset value, the first device records the position coordinates of the second device at that moment. The first device then calculates the width of the viewpoint corresponding to the first device from the position coordinates of the second device at the different positions and the lamination angle of the grating. The width of the viewpoint corresponding to the first device can thus be determined quickly even when the optical parameters of the screen of the first device are unknown, and the stereoscopic image or stereoscopic video displayed by the first device can then be adjusted using the viewpoint width, improving the user's viewing experience.
Referring to FIG. 3, FIG. 3 is a schematic diagram of an application scenario provided by an embodiment of this application. In FIG. 3, the position of the first device is fixed, the position of the second device changes, and the first device records the position coordinates of the second device when the pixel average difference of the images captured by the second device reaches the preset value. As shown in FIG. 3, when the viewpoint width of the 3D film provided on the first device 301 needs to be determined, the first device 301 displays the target image in stereoscopic mode, and the second device captures, at different positions and through the two cameras provided on the second device, the target image displayed by the first device in stereoscopic mode to obtain corresponding images, segments each image to obtain two image areas, calculates the pixel average difference of the two image areas, and then determines whether the pixel average difference reaches the first preset value or the second preset value. If it does, the second device sends the coordinate recording instruction to the first device 301, and after receiving the coordinate recording instruction, the first device 301 records the position coordinates. As shown in FIG. 3, when the second device is at position 302, the first pixel average difference of the captured image of the first device 301 reaches the first preset value; upon receiving the first coordinate recording instruction, the first device 301 may record the first position coordinates of the second device at position 302. The second device then changes position again and captures the first device 301 once more, obtains the corresponding image, segments it into two image areas, calculates their pixel average difference, and determines whether the pixel average difference reaches the second preset value. If the second pixel average difference reaches the second preset value, the second device sends the second coordinate recording instruction to the first device 301, and the first device 301 records the second position coordinates according to the second coordinate recording instruction; as shown in FIG. 3, when the second device is at position 303, the second pixel average difference of the captured image of the first device 301 reaches the second preset value, at which point the second device sends the second coordinate recording instruction to the first device 301, and the first device 301 records the second position coordinates of the second device at position 303 according to the instruction. The first device 301 can then calculate the width of the viewpoint corresponding to the first device 301 according to the first position coordinates of the second device at position 302, the second position coordinates of the second device at position 303, and the lamination angle of the grating corresponding to the first device 301, and further adjust the 3D image or 3D video displayed by the first device 301 in stereoscopic mode according to the width of the viewpoint. In this way, the width of the viewpoint corresponding to the first device 301 can be quickly determined even when the optical parameters of the screen of the first device are unknown, and the 3D image or 3D video displayed by the first device 301 can be adjusted through the width of the viewpoint, improving the user's viewing experience.
It should be noted that, after the second device captures the first device at one position through the two cameras provided on the second device and obtains a group of images of the first device captured at that position, it directly segments the images to obtain the image areas, calculates the corresponding pixel average difference, and determines whether the pixel average difference reaches the first preset value or the second preset value. If neither is reached, the second device changes position and repeats the above steps until a position is found at which the pixel average difference of the images of the first device reaches the first preset value; when the first preset value is reached, the second device sends the coordinate recording instruction to the first device so that the first device returns the corresponding position coordinates. The second device then continues to adjust its position and repeat the above steps until another position is found at which the pixel average difference of the images of the first device reaches the second preset value, and obtains the position coordinates of the two positions from the first device as the first position coordinates and the second position coordinates.
The method for determining the viewpoint width provided by the embodiments of this application has been described above from the perspective of the first device with reference to FIG. 1; it is described below from the perspective of the second device with reference to FIG. 4.
Referring to FIG. 4, FIG. 4 is a schematic diagram of another embodiment of the method for determining the viewpoint width provided by an embodiment of this application, including:
401. The second device captures, in real time, a target image displayed by the first device in stereoscopic mode to obtain a first group of images and a second group of images.
In this embodiment, the first group of images and the second group of images are obtained by the second device capturing the target image simultaneously, at different positions, through two cameras provided on the second device. The two cameras of the second device are on the same horizontal line, and the distance between their centers is a preset distance, for example 65 mm. The second device is any terminal device with image capture and communication functions, and the target image is a half-black, half-color image, where the color in the half-color portion refers to a visible color such as white, red, green, or yellow. That is, when the width of the viewpoint corresponding to the first device needs to be determined (that is, the width of the viewpoint corresponding to the 3D film covering the screen of the target device), the first device displays the half-black, half-color image in 3D mode, and the second device then photographs the first device at different positions to obtain the first group of images and the second group of images.
402. The second device separately segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area.
In this embodiment, after obtaining the first group of images and the second group of images, the second device may segment the two images in the first group of images to obtain the first area and the second area, and segment the two images in the second group of images to obtain the third area and the fourth area. Because an image captured by a single one of the two cameras provided on the second device will include content other than the target image, the purpose of the segmentation here is to ensure that the first area, the second area, the third area, and the fourth area contain only the target image and no other content.
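The segmentation step described above can be sketched as a simple crop (an illustrative sketch only, assuming the screen region of the target image has already been located in each captured image; the helper name and the row-list image representation are chosen for this sketch):

```python
def crop_screen_region(image, top, left, height, width):
    # Keep only the display area of the target image (the screen region),
    # discarding the surrounding content captured by the camera.
    # `image` is a 2D array represented as a list of pixel rows.
    return [row[left:left + width] for row in image[top:top + height]]
```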
403. The second device calculates a first pixel average difference between the first area and the second area, and a second pixel average difference between the third area and the fourth area.
In this embodiment, after segmenting the first group of images and the second group of images to obtain the first area, the second area, the third area, and the fourth area, the second device may calculate the first pixel average difference between the first area and the second area, and the second pixel average difference between the third area and the fourth area. Specifically, the second device may calculate the first pixel average difference and the second pixel average difference using the following formula:
aver_pixel = (1 / (w × h)) × Σ_{(i, j) ∈ A} |Al(i, j) − Ar(i, j)|
where aver_pixel is the first pixel average difference or the second pixel average difference, A is the screen area, Al is the first area or the third area, Ar is the second area or the fourth area, w is the width of the first area or the width of the third area, and h is the height of the second area or the height of the fourth area; the first area and the second area have the same width and the same height, as do the third area and the fourth area.
It can be understood that, when the second device captures the target image to obtain the first group of images and the second group of images, those images may contain content other than the target image. What must therefore be determined here is the pixel average difference of the screen area, that is, the area in which the target image is displayed on the screen of the first device, rather than the pixel average difference of the whole first group of images. In addition, while displaying the target image, the first device may also display the position coordinates of the second device, or directly display the position coordinates of the camera of the second device, on the screen of the first device in real time.
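The per-pair computation can be sketched as follows (an illustrative sketch only, assuming the two regions are equally sized 2D grayscale arrays, represented here as lists of rows, already cropped to the screen area):

```python
def average_pixel_difference(region_left, region_right):
    # Mean absolute per-pixel difference over the screen area:
    # aver_pixel = sum of |Al(i, j) - Ar(i, j)| over all pixels, / (w * h)
    h = len(region_left)
    w = len(region_left[0])
    total = sum(abs(region_left[i][j] - region_right[i][j])
                for i in range(h) for j in range(w))
    return total / (w * h)
```

When the two cameras see the same viewpoint the difference is near zero; when they see opposite halves of the half-black, half-color pattern it approaches the full pixel range, which is what the preset thresholds detect.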
404. If the first pixel average difference reaches a first preset value, the second device sends a first coordinate recording instruction to the first device, so that the first device records first position coordinates.
In this embodiment, after analyzing the first group of images to obtain the first pixel average difference, the second device may determine whether the first pixel average difference reaches the first preset value. If it does, the second device sends the first coordinate recording instruction to the first device, so that the first device records the first position coordinates according to the instruction, where the first position coordinates are the coordinates of the position of the second device when it captured the first group of images. That is, the first device displays the position coordinates of the second device in real time; when the second device determines that the first pixel average difference reaches the first preset value, it may send the first coordinate recording instruction to the first device, and upon receiving the instruction the first device may record the position coordinates of the position at which the second device captured the first group of images.
It can be understood that, when the first pixel average difference does not reach the first preset value, the second device changes position and captures the first device again to obtain images at the new position, and analyzes those images; this is repeated until the pixel average difference of the images captured at some position reaches the first preset value, at which point the second device sends the first coordinate recording instruction to the first device so that the first device records the position coordinates of that position.
405. If the second pixel average difference reaches a second preset value, the second device sends a second coordinate recording instruction to the first device, so that the first device records second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
In this embodiment, after analyzing the second group of images to obtain the second pixel average difference, the second device may determine whether the second pixel average difference reaches the second preset value. If it does, the second device sends the second coordinate recording instruction to the first device, so that the first device records the second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device, where the second position coordinates are the coordinates of the position of the second device when it captured the second group of images. That is, the first device displays the position coordinates of the second device in real time; when the second device determines that the second pixel average difference reaches the second preset value, it may send the second coordinate recording instruction to the first device, and upon receiving the instruction the first device may record the position coordinates of that position and calculate the width of the viewpoint from the two sets of position coordinates and the lamination angle.
It can be understood that, when the second pixel average difference does not reach the second preset value, the second device changes position and captures the first device again to obtain images at the new position, and analyzes those images until the pixel average difference of the images captured at some position reaches the second preset value, at which point it sends the second coordinate recording instruction to the first device so that the first device records the position coordinates of that position.
It should be noted that, after the second device captures the first device at one position and obtains the images of the first device captured at that position, it directly analyzes the images to obtain the corresponding pixel average difference and determines whether the pixel average difference reaches the first preset value or the second preset value. If neither is reached, the second device changes position and repeats the above steps until a position is found at which the pixel average difference of the images of the first device reaches the first preset value; when the first preset value is reached, it sends the first coordinate recording instruction to the first device so that the first device records the corresponding position coordinates. The second device then continues to adjust its position and repeat the above steps until another position is found at which the pixel average difference of the images of the first device reaches the second preset value, and sends the second coordinate recording instruction to the first device, so that the first device records the coordinates of that position and calculates the width of the viewpoint from the coordinates of the two positions and the lamination angle.
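The sweep described above can be sketched as the following search loop (an illustrative sketch only: `capture_pair`, `move`, and `diff_fn` stand in for the stereo capture, the position change, and the pixel-average-difference computation, none of which are specified at this level in the embodiments):

```python
def find_record_positions(capture_pair, move, diff_fn, start_pos,
                          first_preset, second_preset):
    # Move the second device across positions until the pixel average
    # difference of a captured stereo pair reaches the first preset value,
    # record that position, then continue until the second preset value is
    # reached and record again.
    thresholds = [first_preset, second_preset]
    recorded = []
    pos = start_pos
    while thresholds:
        left_img, right_img = capture_pair(pos)
        if diff_fn(left_img, right_img) >= thresholds[0]:
            recorded.append(pos)
            thresholds.pop(0)
        pos = move(pos)
    return recorded  # [first position coordinates, second position coordinates]
```

In practice the two recorded coordinates would then be fed, together with the lamination angle, into the viewpoint-width formula VW = abs(X0′ − X1′).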
The method for determining the viewpoint width provided by the embodiments of this application has been described above from the perspectives of the second device and the first device respectively; it is described below, with reference to FIG. 5, from the perspective of the interaction between the first device and the second device.
请参阅图5,图5为本申请实施例提供的视点宽度的确定方法的另一实施例示意图,包括:Please refer to FIG. 5. FIG. 5 is a schematic diagram of another embodiment of the method for determining the viewpoint width provided by the embodiment of the present application, including:
501. The first device displays a target image in a stereoscopic mode.
502. The second device shoots the target image in real time to obtain a first group of images and a second group of images.
503. The second device respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area.
504. The second device respectively calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area.
It can be understood that steps 501 to 504 are similar to steps 401 to 403 in FIG. 4, which have been described in detail above and will not be repeated here.
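The region comparison in step 504 reduces to a mean absolute per-pixel difference (cf. the formula recited in claim 8). A minimal sketch, assuming grayscale regions supplied as equally sized nested lists indexed `region[row][col]`:

```python
def pixel_average_difference(region_l, region_r):
    """Mean absolute difference between two equally sized regions;
    mirrors the aver_piexl quantity defined in claim 8."""
    h, w = len(region_l), len(region_l[0])
    total = sum(
        abs(region_l[i][j] - region_r[i][j])  # per-pixel absolute difference
        for i in range(h)
        for j in range(w)
    )
    return total / (w * h)  # average over all w * h pixels
```

In practice the two regions would be the crops of the left- and right-camera images produced by the segmentation step; identical regions yield a difference of zero.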
505. If the first pixel average difference reaches a first preset value, the second device sends a first coordinate recording instruction to the first device.
506. The first device records first position coordinates according to the first coordinate recording instruction.
507. If the second pixel average difference reaches a second preset value, the second device sends a second coordinate recording instruction to the first device.
508. The first device records second position coordinates according to the second coordinate recording instruction.
It can be understood that steps 505 to 508 are similar to the steps of recording position coordinates in FIG. 1 and FIG. 4, which have been described in detail above and will not be repeated here.
509. The first device determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
It can be understood that step 509 is similar to step 104 in FIG. 1, which has been described in detail above and will not be repeated here.
It should be noted that the above embodiments take, as an example, the case in which the first device calculates the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating. Alternatively, after recording the first position coordinates and the second position coordinates, the first device may send them to the second device, and the second device may calculate the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device, and then send that width to the first device. In addition, the analysis of the first group of images and the second group of images to obtain the pixel average differences may also be performed by the first device; this is not specifically limited here.
It should also be noted that the pixel average difference calculation, the position calculation, and the viewpoint width calculation have been described in detail above with reference to FIG. 1 to FIG. 5; the calculations here are performed in the same manner as those described with reference to FIG. 1 to FIG. 5, only the executing entity differs, and details are not repeated here.
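Combining the formulas recited in claims 3 to 5, the final width computation from the two recorded coordinates and the lamination angle can be sketched as follows. This is an illustrative sketch assuming the angle `a` is given in radians and `y` is the preset constant from claim 3:

```python
import math

def viewpoint_width(p0, p1, a, y=0.0):
    """Width of the viewpoint from two recorded positions (x0, y0) and (x1, y1)
    and the lamination angle a, per X' = x + (y_i - y) * tan(a) and
    VW = |X0' - X1'|."""
    x0_prime = p0[0] + (p0[1] - y) * math.tan(a)  # first position (claim 3)
    x1_prime = p1[0] + (p1[1] - y) * math.tan(a)  # second position (claim 4)
    return abs(x0_prime - x1_prime)               # viewpoint width (claim 5)
```

Note that when the two recorded positions share the same y-coordinate, the tan(a) terms cancel and the width reduces to the horizontal distance between the two positions.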
The embodiments of the present application have been described above from the perspective of the method for determining the viewpoint width; they are described below from the perspective of the apparatus for determining the viewpoint width. The apparatus for determining the viewpoint width includes a first device 600 and a second device 700.
Please refer to FIG. 6, which is a schematic diagram of a virtual structure of the first device provided by an embodiment of the present application. The first device 600 includes:
a display unit 601, configured to display a target image in a stereoscopic mode, so that a second device shoots the target image in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area, and calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area, where the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions with two cameras provided on the second device, the two cameras of the second device are on the same horizontal line with a preset distance between their centers, the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
a recording unit 602, configured to: if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference reaches a first preset value, record first position coordinates at which the second device captured the first group of images; and if the first device receives a second coordinate recording instruction sent by the second device when the second pixel average difference reaches a second preset value, record second position coordinates at which the second device captured the second group of images;
a determining unit 603, configured to determine the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
Please refer to FIG. 7, which is a schematic diagram of a virtual structure of the second device provided by an embodiment of the present application. The second device 700 includes:
a shooting unit 701, configured to shoot, in real time, a target image displayed by a first device in a stereoscopic mode, to obtain a first group of images and a second group of images, where the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions with two cameras provided on the second device, and the two cameras of the second device are on the same horizontal line with a preset distance between their centers;
a segmentation unit 702, configured to respectively segment the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area, where the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
a calculation unit 703, configured to respectively calculate a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area;
a transceiver unit 704, configured to: if the first pixel average difference reaches a first preset value, send a first coordinate recording instruction to the first device, so that the first device records first position coordinates, the first position coordinates being the coordinates of the position of the second device when capturing the first group of images; and if the second pixel average difference reaches a second preset value, send a second coordinate recording instruction to the first device, so that the first device records second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device, the second position coordinates being the coordinates of the position of the second device when capturing the second group of images.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and are not intended to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some or all of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. A method for determining a width of a viewpoint, comprising:
    displaying, by a first device, a target image in a stereoscopic mode, so that a second device shoots the target image in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area, and calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area, wherein the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions with two cameras provided on the second device, the two cameras of the second device are on the same horizontal line with a preset distance between their centers, the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
    if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference reaches a first preset value, recording, by the first device, first position coordinates at which the second device captured the first group of images;
    if the first device receives a second coordinate recording instruction sent by the second device when the second pixel average difference reaches a second preset value, recording, by the first device, second position coordinates at which the second device captured the second group of images;
    determining, by the first device, a width of a viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and a lamination angle of a grating corresponding to the first device.
  2. The method according to claim 1, wherein the determining, by the first device, the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device comprises:
    determining, by the first device, a first position of the second device according to the lamination angle and the first position coordinates, the first position being the position of the second device when the first pixel average difference reaches the first preset value;
    determining, by the first device, a second position of the second device according to the lamination angle and the second position coordinates, the second position being the position of the second device when the second pixel average difference reaches the second preset value;
    determining, by the first device, the width of the viewpoint according to the first position and the second position.
  3. The method according to claim 2, wherein the determining, by the first device, the first position of the second device according to the lamination angle and the first position coordinates comprises:
    calculating, by the first device, the first position by the following formula:
    X0′ = x0 + (y0 − y) * tan(a);
    wherein X0′ is the first position, the first position coordinates are (x0, y0), y is a preset constant, and a is the lamination angle.
  4. The method according to claim 3, wherein the determining, by the first device, the second position of the second device according to the lamination angle and the second position coordinates comprises:
    calculating, by the first device, the second position by the following formula:
    X1′ = x1 + (y1 − y) * tan(a);
    wherein X1′ is the second position, and the second position coordinates are (x1, y1).
  5. The method according to claim 4, wherein the determining, by the first device, the width of the viewpoint according to the first position and the second position comprises:
    calculating, by the first device, the width of the viewpoint by the following formula:
    VW = abs(X0′ − X1′);
    wherein VW is the width of the viewpoint, and abs is the absolute value function.
  6. The method according to any one of claims 1 to 5, further comprising:
    obtaining a width of the grating;
    determining an arrangement layout of viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoint;
    adjusting a stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and changes in the position of a user's eyes.
  7. A method for determining a width of a viewpoint, comprising:
    shooting in real time, by a second device, a target image displayed by a first device in a stereoscopic mode, to obtain a first group of images and a second group of images, wherein the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions with two cameras provided on the second device, and the two cameras of the second device are on the same horizontal line with a preset distance between their centers;
    respectively segmenting, by the second device, the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area, wherein the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
    respectively calculating, by the second device, a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area;
    if the first pixel average difference reaches a first preset value, sending, by the second device, a first coordinate recording instruction to the first device, so that the first device records first position coordinates, the first position coordinates being the coordinates of the position of the second device when capturing the first group of images;
    if the second pixel average difference reaches a second preset value, sending, by the second device, a second coordinate recording instruction to the first device, so that the first device records second position coordinates and determines a width of a viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and a lamination angle of a grating corresponding to the first device, the second position coordinates being the coordinates of the position of the second device when capturing the second group of images.
  8. The method according to claim 7, wherein the respectively calculating, by the second device, the first pixel average difference between the first area and the second area and the second pixel average difference between the third area and the fourth area comprises:
    calculating, by the second device, the first pixel average difference by the following formula:
    aver_piexl = ( Σ_{i=1}^{w} Σ_{j=1}^{h} |Al(i, j) − Ar(i, j)| ) / (w × h);
    wherein aver_piexl is the first pixel average difference, w is the width of the first area, h is the height of the first area, Al is the first area, Ar is the second area, and the first area and the second area have the same width and the same height;
    calculating, by the second device, the second pixel average difference by the following formula:
    aver_piexl = ( Σ_{i=1}^{w} Σ_{j=1}^{h} |Al(i, j) − Ar(i, j)| ) / (w × h);
    wherein aver_piexl is the second pixel average difference, w is the width of the third area, h is the height of the third area, Al is the third area, Ar is the fourth area, and the third area and the fourth area have the same width and the same height.
  9. An apparatus for determining a width of a viewpoint, comprising a first device, the first device comprising:
    a display unit, configured to display a target image in a stereoscopic mode, so that a second device shoots the target image in real time to obtain a first group of images and a second group of images, respectively segments the two images in the first group of images and the two images in the second group of images to obtain a first area, a second area, a third area, and a fourth area, and calculates a first pixel average difference between the first area and the second area and a second pixel average difference between the third area and the fourth area, wherein the first group of images and the second group of images are obtained by the second device simultaneously shooting the target image at different positions with two cameras provided on the second device, the two cameras of the second device are on the same horizontal line with a preset distance between their centers, the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
    a recording unit, configured to: if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference reaches a first preset value, record first position coordinates at which the second device captured the first group of images; and if the first device receives a second coordinate recording instruction sent by the second device when the second pixel average difference reaches a second preset value, record second position coordinates at which the second device captured the second group of images;
    a determining unit, configured to determine the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device.
  10. The apparatus according to claim 9, further comprising a second device, the second device comprising:
    a shooting unit, configured to shoot, in real time, the target image displayed by the first device in the stereoscopic mode, to obtain the first group of images and the second group of images;
    a segmentation unit, configured to respectively segment the two images in the first group of images and the two images in the second group of images to obtain the first area, the second area, the third area, and the fourth area;
    a calculation unit, configured to respectively calculate the first pixel average difference between the first area and the second area and the second pixel average difference between the third area and the fourth area;
    a transceiver unit, configured to: if the first pixel average difference reaches the first preset value, send the first coordinate recording instruction to the first device, so that the first device records the first position coordinates, the first position coordinates being the coordinates of the position of the second device when capturing the first group of images; and if the second pixel average difference reaches the second preset value, send the second coordinate recording instruction to the first device, so that the first device records the second position coordinates and determines the width of the viewpoint corresponding to the first device according to the first position coordinates, the second position coordinates, and the lamination angle of the grating corresponding to the first device, the second position coordinates being the coordinates of the position of the second device when capturing the second group of images.
PCT/CN2022/117710 2021-09-08 2022-09-08 Method and apparatus for determining width of viewpoint WO2023036218A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111049335.9A CN113781560B (en) 2021-09-08 2021-09-08 Viewpoint width determining method, device and storage medium
CN202111049335.9 2021-09-08

Publications (1)

Publication Number Publication Date
WO2023036218A1 true WO2023036218A1 (en) 2023-03-16

Family

ID=78841632

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/117710 WO2023036218A1 (en) 2021-09-08 2022-09-08 Method and apparatus for determining width of viewpoint

Country Status (2)

Country Link
CN (1) CN113781560B (en)
WO (1) WO2023036218A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781560B (en) * 2021-09-08 2023-12-22 未来科技(襄阳)有限公司 Viewpoint width determining method, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130293691A1 (en) * 2011-05-27 2013-11-07 JVC Kenwood Corporation Naked-eye stereoscopic display apparatus, viewpoint adjustment method, and naked-eye stereoscopic vision-ready video data generation method
CN108259888A (en) * 2016-12-29 2018-07-06 深圳超多维光电子有限公司 The test method and system of stereo display effect
CN108683906A (en) * 2018-05-29 2018-10-19 张家港康得新光电材料有限公司 A kind of bore hole 3D display device parameter test method, device, equipment and medium
WO2019080295A1 (en) * 2017-10-23 2019-05-02 上海玮舟微电子科技有限公司 Naked-eye 3d display method and control system based on eye tracking
CN110139095A (en) * 2019-05-14 2019-08-16 深圳市新致维科技有限公司 A kind of naked eye 3D display mould group detection method, system and readable storage medium storing program for executing
CN113763472A (en) * 2021-09-08 2021-12-07 未来科技(襄阳)有限公司 Method and device for determining viewpoint width and storage medium
CN113781560A (en) * 2021-09-08 2021-12-10 未来科技(襄阳)有限公司 Method and device for determining viewpoint width and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69531583T2 (en) * 1994-10-14 2004-06-24 Canon K.K. Image processing method and device
JP7204357B2 (en) * 2017-09-20 2023-01-16 キヤノン株式会社 Imaging device and its control method
CN110599602B (en) * 2019-09-19 2023-06-09 百度在线网络技术(北京)有限公司 AR model training method and device, electronic equipment and storage medium
CN112731343B (en) * 2020-12-18 2023-12-12 福建汇川物联网技术科技股份有限公司 Target measurement method and device for measurement camera
CN113286084B (en) * 2021-05-21 2022-10-25 展讯通信(上海)有限公司 Terminal image acquisition method and device, storage medium and terminal

Also Published As

Publication number Publication date
CN113781560B (en) 2023-12-22
CN113781560A (en) 2021-12-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22866681

Country of ref document: EP

Kind code of ref document: A1