CN113781560B - Viewpoint width determining method, device and storage medium

Info

Publication number
CN113781560B
Authority
CN
China
Prior art keywords
images
group
area
region
difference value
Prior art date
Legal status
Active
Application number
CN202111049335.9A
Other languages
Chinese (zh)
Other versions
CN113781560A (en)
Inventor
贺曙
徐万良
Current Assignee
Future Technology Xiang Yang Co ltd
Original Assignee
Future Technology Xiang Yang Co ltd
Priority date
Filing date
Publication date
Application filed by Future Technology Xiang Yang Co ltd filed Critical Future Technology Xiang Yang Co ltd
Priority to CN202111049335.9A priority Critical patent/CN113781560B/en
Publication of CN113781560A publication Critical patent/CN113781560A/en
Priority to PCT/CN2022/117710 priority patent/WO2023036218A1/en
Application granted granted Critical
Publication of CN113781560B publication Critical patent/CN113781560B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/97: Determining parameters from multiple pictures
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images


Abstract

The invention provides a method for determining the width of a viewpoint, comprising the following steps: a first device displays a target image so that a second device can shoot the target image in real time to obtain a first group of images and a second group of images; the two images in the first group and the two images in the second group are segmented to obtain a first area, a second area, a third area and a fourth area, and a first pixel average difference value between the first area and the second area and a second pixel average difference value between the third area and the fourth area are calculated; if the first pixel average difference value reaches a first preset value, a first position coordinate of the second device when shooting the first group of images is recorded; if the second pixel average difference value reaches a second preset value, a second position coordinate of the second device when shooting the second group of images is recorded; and the width of the viewpoint corresponding to the first device is determined according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device.

Description

Viewpoint width determining method, device and storage medium
[ field of technology ]
The application belongs to the field of naked eye 3D, and in particular relates to a method, a device and a storage medium for determining the width of a viewpoint.
[ background art ]
Naked eye 3D, autostereoscopy for short, is a generic term for technologies that achieve stereoscopic display effects without external tools such as polarized glasses.
In a naked eye 3D system with human eye tracking, a device acquires images through a front camera and tracks the positions of the viewer's eyes, and then calculates the viewpoint corresponding to the current eye positions; in this process of acquiring images through the front camera and tracking the eye positions, the width of each viewpoint in the naked eye 3D system needs to be determined.
At present, the width of a viewpoint is mainly derived through optical design. In actual use, however, gratings are often applied to third-party devices, and the screen optical parameters of such a device, such as the glass thickness, the optical adhesive thickness and the size of assembly gaps, cannot be obtained exactly, so the width of the viewpoint cannot be determined accurately.
[ invention ]
The purpose of the application is to provide a method, a device and a storage medium for determining the viewpoint width, which can quickly determine the width of the viewpoint corresponding to a device even when the screen optical parameters of the device are unknown, and can then adjust the 3D image or 3D video displayed by the device according to the viewpoint width, thereby improving the viewing experience of the user.
An embodiment of the present application provides a method for determining a viewpoint width, including:
the method comprises the steps that a first device displays a target image in a stereoscopic mode, so that a second device shoots the target image displayed in the stereoscopic mode by the first device in real time to obtain a first group of images and a second group of images, segments the two images in the first group of images and the two images in the second group of images respectively to obtain a first area, a second area, a third area and a fourth area, and calculates a first pixel average difference value between the first area and the second area and a second pixel average difference value between the third area and the fourth area, wherein the first group of images and the second group of images are obtained by the second device shooting the target image at different positions through two cameras arranged on the second device, the two cameras shooting at the same time; the two cameras of the second device are on the same horizontal line, and their center-to-center distance is a preset distance; the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference value reaches a first preset value, the first device records a first position coordinate of the second device when it shoots the first group of images;
if the first device receives a second coordinate recording instruction sent by the second device when the second pixel average difference value reaches a second preset value, the first device records a second position coordinate of the second device when it shoots the second group of images;
and the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device.
Optionally, the determining, by the first device, of the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle includes:
the first device determines a first position of the second device according to the fitting angle and the first position coordinate, wherein the first position is the position of the second device when the first pixel average difference value reaches the first preset value;
the first device determines a second position of the second device according to the fitting angle and the second position coordinate, wherein the second position is the position of the second device when the second pixel average difference value reaches the second preset value;
and the width of the viewpoint is determined according to the first position and the second position.
Optionally, the determining, by the first device, of the first position of the second device according to the fitting angle and the first position coordinate includes:
the first device calculates the first position by the following formula:
X0′ = x0 + (y0 - y) * tan(a);
wherein X0′ is the first position, the first position coordinate is (x0, y0), y is a preset constant, and a is the fitting angle;
the determining, by the first device, of the second position of the second device according to the fitting angle and the second position coordinate includes:
the first device calculates the second position by the following formula:
X1′ = x1 + (y1 - y) * tan(a);
wherein X1′ is the second position, and the second position coordinate is (x1, y1);
the determining, by the first device, of the width of the viewpoint according to the first position and the second position includes:
the first device calculates the width of the viewpoint by the following formula:
VW = abs(X0′ - X1′);
wherein VW is the width of the viewpoint, and abs is the absolute-value function.
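Read together, the three formulas above compose into a few lines of arithmetic. The following minimal Python sketch illustrates the calculation; the function name, the tuple convention for the coordinates and the assumption that the fitting angle a is given in degrees are illustrative choices, since the claims fix neither units nor an interface:

    import math

    def viewpoint_width(p0, p1, y, fitting_angle_deg):
        """Width of the viewpoint from two recorded position coordinates.

        p0: first position coordinate (x0, y0), recorded when the first pixel
            average difference value reached the first preset value.
        p1: second position coordinate (x1, y1), recorded when the second
            pixel average difference value reached the second preset value.
        y:  the preset constant (e.g. half the width of the screen area).
        fitting_angle_deg: fitting angle a of the grating, assumed in degrees.
        """
        a = math.radians(fitting_angle_deg)
        x0_prime = p0[0] + (p0[1] - y) * math.tan(a)  # X0' = x0 + (y0 - y) * tan(a)
        x1_prime = p1[0] + (p1[1] - y) * math.tan(a)  # X1' = x1 + (y1 - y) * tan(a)
        return abs(x0_prime - x1_prime)               # VW = abs(X0' - X1')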
Optionally, the method further comprises:
acquiring the width of the grating;
determining the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoint;
and adjusting the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and changes in the position of the user's eyes.
A second aspect of the embodiments of the present application provides a method for determining a viewpoint width, including:
the method comprises the steps that a second device shoots, in real time, a target image displayed by a first device in a stereoscopic mode to obtain a first group of images and a second group of images, wherein the first group of images and the second group of images are obtained by the second device shooting the target image at different positions through two cameras arranged on the second device, the two cameras shooting at the same time; the two cameras of the second device are on the same horizontal line, and their center-to-center distance is a preset distance;
the second device divides two images in the first group of images and two images in the second group of images respectively to obtain a first area, a second area, a third area and a fourth area, wherein the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
The second device calculates a first pixel average difference value of the first region and the second region and a second pixel average difference value of the third region and the fourth region respectively;
if the first pixel average difference value reaches a first preset value, the second device sends a first coordinate recording instruction to the first device, so that the first device records a first position coordinate, wherein the first position coordinate is the coordinate of the position where the second device is located when shooting the first group of images;
if the second pixel average difference value reaches a second preset value, the second device sends a second coordinate recording instruction to the first device, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device, wherein the second position coordinate is the coordinate of the position where the second device is located when shooting the second group of images.
Optionally, the calculating, by the second device, of the first pixel average difference value of the first region and the second region and the second pixel average difference value of the third region and the fourth region respectively includes:
the second device calculates the first pixel average difference value by the following formula:
aver_pixel = ( Σ_{i=1..w} Σ_{j=1..h} | Al(i, j) - Ar(i, j) | ) / (w * h);
wherein aver_pixel is the first pixel average difference value, w is the width of the first region, h is the height of the first region, Al is the first region, Ar is the second region, and the first region and the second region have the same width and height;
the second device calculates the second pixel average difference value by the same formula, wherein aver_pixel is then the second pixel average difference value, w is the width of the third region, h is the height of the third region, Al is the third region, and Ar is the fourth region, the third region and the fourth region having the same width and height.
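Concretely, the value defined above is the mean absolute per-pixel difference of two equally sized regions. A minimal NumPy sketch follows; the function name is illustrative, and averaging over all color channels of a multi-channel image is an assumption the patent does not spell out:

    import numpy as np

    def pixel_average_difference(region_l, region_r):
        """Mean absolute per-pixel difference of two equally sized regions.

        region_l, region_r: arrays of shape (h, w) or (h, w, channels)
        holding the two segmented regions (Al and Ar); both must have the
        same width and height.
        """
        if region_l.shape != region_r.shape:
            raise ValueError("regions must have the same width and height")
        diff = np.abs(region_l.astype(np.int32) - region_r.astype(np.int32))
        return diff.mean()  # sum of |Al(i, j) - Ar(i, j)| divided by w * h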
A third aspect of the embodiments of the present application provides a device, where the device is a first device, and includes:
the display unit is used for displaying a target image in a stereoscopic mode, so that a second device shoots the target image displayed in the stereoscopic mode by the first device in real time to obtain a first group of images and a second group of images, segments the two images in the first group of images and the two images in the second group of images respectively to obtain a first area, a second area, a third area and a fourth area, and calculates a first pixel average difference value between the first area and the second area and a second pixel average difference value between the third area and the fourth area, wherein the first group of images and the second group of images are obtained by the second device shooting the target image at different positions through two cameras arranged on the second device, the two cameras shooting at the same time; the two cameras of the second device are on the same horizontal line, and their center-to-center distance is a preset distance; the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
the recording unit is used for recording a first position coordinate of the second device when it shoots the first group of images if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference value reaches a first preset value, and for recording a second position coordinate of the second device when it shoots the second group of images if the first device receives a second coordinate recording instruction sent by the second device when the second pixel average difference value reaches a second preset value;
and the determining unit is used for determining the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device.
A fourth aspect of the embodiments of the present application provides a device, where the device is a second device, and includes:
the shooting unit is used for shooting, in real time, a target image displayed by a first device in a stereoscopic mode to obtain a first group of images and a second group of images, wherein the first group of images and the second group of images are obtained by the second device shooting the target image at different positions through two cameras arranged on the second device, the two cameras shooting at the same time; the two cameras of the second device are on the same horizontal line, and their center-to-center distance is a preset distance;
The segmentation unit is used for respectively segmenting two images in the first group of images and two images in the second group of images to obtain a first region, a second region, a third region and a fourth region, wherein the first region and the second region correspond to the two images in the first group of images, and the third region and the fourth region correspond to the two images in the second group of images;
a calculating unit, configured to calculate a first pixel average difference value of the first region and the second region and a second pixel average difference value of the third region and the fourth region, respectively;
the transceiver unit is used for sending a first coordinate recording instruction to the first device if the first pixel average difference value reaches a first preset value, so that the first device records a first position coordinate, wherein the first position coordinate is the coordinate of the position where the second device is located when shooting the first group of images; and for sending a second coordinate recording instruction to the first device if the second pixel average difference value reaches a second preset value, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device, wherein the second position coordinate is the coordinate of the position where the second device is located when shooting the second group of images.
A fifth aspect of the embodiments of the present application provides a computer device, which includes at least one processor, a memory and a transceiver that are connected, where the memory is configured to store program code, and the processor is configured to invoke the program code in the memory to perform the steps of the method for determining a viewpoint width described in the above aspects.
A sixth aspect of the embodiments of the present application provides a computer storage medium including instructions which, when run on a computer, cause the computer to perform the steps of the method for determining a viewpoint width described in the above aspects.
In the embodiments provided herein, compared with the related art, the width of the viewpoint corresponding to a device can be determined quickly even when the screen optical parameters of the device are unknown, and the 3D image or 3D video displayed by the device can then be adjusted according to the viewpoint width, improving the viewing experience of the user.
[ description of the drawings ]
Fig. 1 is a schematic diagram of an embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 2 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 3 is an application scenario schematic diagram of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 4 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 5 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
Fig. 6 is a schematic virtual structure of a first device according to an embodiment of the present application;
fig. 7 is a schematic virtual structure of a second device according to an embodiment of the present application;
fig. 8 is a schematic hardware structure of a first device and a second device provided in an embodiment of the present application.
[ detailed description ]
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application.
The terms "first", "second" and the like in the description, the claims and the drawings of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises", "comprising" and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, system, article or apparatus that comprises a list of steps or modules is not necessarily limited to those expressly listed, but may include other steps or modules not expressly listed or inherent to such a process, method, article or apparatus. The division of modules in the present application is only one logical division, and other divisions are possible in actual implementation; for example, a plurality of modules may be combined or integrated in another system, or some features may be omitted or not implemented. The coupling, direct coupling or communication connection between the modules shown or discussed may be through interfaces, and the indirect coupling or communication connection between modules may be electrical or in another similar form, none of which is limiting in this application. The modules or sub-modules described as separate components may or may not be physically separate, or may be distributed across a plurality of circuit modules; some or all of them may be selected according to actual needs to achieve the purposes of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of an embodiment of a method for determining a viewpoint width according to an embodiment of the present application, including:
101. The first device displays the target image in a stereoscopic mode, so that the second device shoots the target image displayed in the stereoscopic mode by the first device in real time to obtain a first group of images and a second group of images, segments the two images in the first group of images and the two images in the second group of images respectively to obtain a first area, a second area, a third area and a fourth area, and calculates a first pixel average difference value between the first area and the second area and a second pixel average difference value between the third area and the fourth area.
In this embodiment, when the width of the viewpoint corresponding to the first device needs to be determined, the first device displays the target image in a stereoscopic mode, so that the second device shoots the target image displayed in the stereoscopic mode by the first device in real time to obtain a first group of images and a second group of images, segments the two images in the first group of images and the two images in the second group of images respectively to obtain a first area, a second area, a third area and a fourth area, and calculates a first pixel average difference value between the first area and the second area and a second pixel average difference value between the third area and the fourth area. The first group of images and the second group of images are obtained by the second device shooting the target image at different positions through the two cameras arranged on it, the two cameras shooting at the same time; the two cameras of the second device are on the same horizontal line, and their center-to-center distance is a preset distance; the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images. The second device is any terminal device with an image acquisition function and a communication function, and the target image is a half-black, half-color image, where "color" refers to a visible color such as white, red, green or yellow. It will be appreciated that when the second device captures the target image to obtain the first group of images and the second group of images, each image may include not only the target image but also other content, so what needs to be determined is the pixel average difference value of the screen area, i.e. the display area of the target image on the screen of the first device, rather than the pixel average difference value of each whole image. In addition, while displaying the target image, the first device may display the position coordinate of the second device in real time on its screen, or directly display the position coordinate of the camera of the second device.
102. If a first coordinate recording instruction sent by the second device when the first pixel average difference value reaches a first preset value is received, the first device records the first position coordinate of the position where the second device is located when shooting the first group of images.
In this embodiment, after the second device segments the first group of images to obtain the first area and the second area and calculates the first pixel average difference value between them, it may determine whether the first pixel average difference value reaches the first preset value; if so, the second device sends a first coordinate recording instruction, and the first device records, according to the instruction, the first position coordinate of the position where the second device is located when shooting the first group of images.
103. If a second coordinate recording instruction sent by the second device when the second pixel average difference value reaches a second preset value is received, the first device records the second position coordinate of the position where the second device is located when shooting the second group of images.
In this embodiment, after the second device segments the second group of images to obtain the third area and the fourth area and calculates the second pixel average difference value between them, it may determine whether the second pixel average difference value reaches the second preset value; if so, the second device sends a second coordinate recording instruction, and the first device records, according to the received instruction, the second position coordinate of the position where the second device is located when shooting the second group of images.
104. The first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device.
In this embodiment, after the first position coordinate and the second position coordinate are recorded, the first device may obtain the fitting angle of the grating corresponding to the first device (the fitting angle is the angle at which the grating of the 3D film attached to the first device is laminated; the way of obtaining it is not limited here, and it may, for example, be input by a user), and determine the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle.
In one embodiment, determining, by the first device, the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device includes:
the first device determines a first position of the second device according to the fitting angle and the first position coordinate, wherein the first position is the position of the second device when the first pixel average difference value reaches the first preset value;
the first device determines a second position of the second device according to the fitting angle and the second position coordinate, wherein the second position is the position of the second device when the second pixel average difference value reaches the second preset value;
the first device determines the width of the viewpoint according to the first position and the second position.
In this embodiment, the first device may calculate, based on the fitting angle and the first position coordinate, the first position, i.e. the position where the second device shoots the first group of images when the first pixel average difference value reaches the first preset value, according to the following formula:
X0′ = x0 + (y0 - y) * tan(a);
wherein X0′ is the first position, the first position coordinate is (x0, y0), y is a preset constant, and a is the fitting angle;
the first device may then calculate, based on the fitting angle and the second position coordinate, the second position, i.e. the position where the second device shoots the second group of images when the second pixel average difference value reaches the second preset value, according to the following formula:
X1′ = x1 + (y1 - y) * tan(a);
wherein X1′ is the second position, and the second position coordinate is (x1, y1);
after the first device calculates the first position and the second position according to the formulas above, the width of the viewpoint corresponding to the first device may be calculated, based on the first position and the second position, according to the following formula:
VW = abs(X0′ - X1′);
wherein VW is the width of the viewpoint corresponding to the first device, and abs is the absolute-value function.
The width calculation of the viewpoint will be described with reference to fig. 2, which is a schematic diagram of the viewpoint width calculation according to an embodiment of the present application. In fig. 2, 201 is the first position coordinate (x0, y0), 202 is the second position coordinate (x1, y1), and 203 is the coordinate of the preset constant y in the coordinate system (it will be understood that the preset constant y may be set to half the width of the screen area, or set according to the actual situation, and is not specifically limited). Taking the calculation of the first position X0′ as an example: the fitting angle a of the grating is known, and the first position coordinate (x0, y0) has been recorded; after converting the preset constant 203 to the same direction as the Y-axis of the first position coordinate, the first position can be calculated by the formula X0′ = x0 + (y0 - y) * tan(a), and the second position similarly; then VW = abs(X0′ - X1′) gives the absolute value of the difference between the first position and the second position, i.e. the width of the viewpoint corresponding to the first device.
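To make the composition concrete, take hypothetical readings (none of these numbers come from the patent): fitting angle a = 10°, first position coordinate (x0, y0) = (120, 300), second position coordinate (x1, y1) = (150, 260), and preset constant y = 100. Then X0′ = 120 + (300 - 100) * tan(10°) ≈ 120 + 35.3 = 155.3, X1′ = 150 + (260 - 100) * tan(10°) ≈ 150 + 28.2 = 178.2, and VW = abs(155.3 - 178.2) ≈ 22.9, expressed in the same units as the recorded coordinates.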
It should be noted that, after the first device obtains the width of the viewpoint corresponding to it, the 3D image or 3D video displayed when the first device operates in 3D mode may be adjusted based on that width. Specifically, the first device may obtain the width of the grating corresponding to the first device, determine the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoint, and adjust the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and changes in the position of the user's eyes. That is, once the width of the viewpoint is obtained, since the width of the grating of the first device is known, the arrangement layout of the grating of the 3D film attached to the screen of the first device can be calculated, and the 3D image displayed or the 3D video played when the first device operates in the stereoscopic mode can then be adjusted according to changes in the position of the viewer's eyes, providing a better 3D display effect for the user.
It should be noted that, in the description above, the position of the first device is unchanged, the second device changes its position, and the first device tracks the position coordinate of the camera of the second device and records it at each position change. Other arrangements are possible: for example, the position of the second device may be kept unchanged while the first device changes position and records the position coordinate of the camera of the second device, as long as the position coordinates of the second device at the moments when it captures images of the first device from different positions can be recorded. It can be understood that, when the position of the second device is unchanged and the position coordinates are recorded by changing the position of the first device, the specific implementation process is as follows:
when the first device displays the target image in 3D mode, the second device shoots the first device, and the position coordinate of the second device (or of its camera; this is not specifically limited) is displayed on the screen of the first device. The first device adjusts its position until it receives a coordinate recording instruction from the second device, and then records the position coordinate of the second device at the current position; the instruction is sent when the second device analyzes the images obtained by shooting the first device, obtains the corresponding pixel average difference value, and that value reaches the first preset value or the second preset value. The first device then continues adjusting its position until it receives a coordinate recording instruction from the second device again, and records the position coordinate of the second device at the current position; this instruction, too, is sent when the corresponding pixel average difference value reaches the first preset value or the second preset value. In this way, the position coordinates of two different positions are obtained, and the width of the viewpoint is calculated from the position coordinates of the two different positions and the fitting angle.
In summary, in the embodiments provided by the present application, when determining the viewpoint width of the first device, the second device may shoot the first device at different positions to obtain multiple images, segment them to obtain two image areas, and calculate the pixel average difference value of the two areas; the first device records the position coordinate of the second device when the pixel average difference value reaches a preset value, and then calculates the width of the viewpoint corresponding to the first device according to the position coordinates of the second device at the different positions and the fitting angle of the grating. The width of the viewpoint corresponding to the first device can thus be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can then be adjusted according to the viewpoint width, improving the viewing experience of the user.
Referring to fig. 3, fig. 3 is a schematic diagram of an application scenario provided in this embodiment, in which the position of the first device is fixed, the second device changes position, and the first device records the position coordinate of the second device when the pixel average difference value of the images captured by the second device reaches a preset value. As shown in fig. 3, when the viewpoint width of the 3D film attached to the first device 301 needs to be determined, the first device 301 displays the target image in 3D mode, and the second device captures the target image displayed in 3D mode through its two cameras at different positions to obtain the corresponding images, segments the images to obtain two image areas, and calculates the pixel average difference value of the two areas. The second device then determines whether the pixel average difference value reaches the first preset value; if so, it sends a coordinate recording instruction to the first device 301, and when the first device 301 receives the instruction, it records the first position coordinate, for example the position coordinate of the second device at position 302 in fig. 3. The second device then changes position again, shoots the first device again to obtain the corresponding image, segments it into two image areas and calculates their pixel average difference value, and determines whether it reaches the second preset value; if so, the second device sends a coordinate recording instruction to the first device 301, which records the second position coordinate, for example the position coordinate of the second device at position 303 in fig. 3. The first device 301 can then calculate the width of the viewpoint corresponding to the first device 301 according to the position coordinate of the second device at position 302, the position coordinate of the second device at position 303, and the fitting angle of the grating corresponding to the first device 301, and then adjust the 3D image or 3D video displayed by the first device 301 according to the width of the viewpoint. In this way, the width of the viewpoint corresponding to the first device can be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can be adjusted according to the viewpoint width, improving the viewing experience of the user.
It should be noted that, after the second device shoots the first device at a position through its two cameras to obtain a group of images captured at that position, the second device directly segments the images to obtain the image areas, calculates the corresponding pixel average difference value, and determines whether it reaches the first preset value or the second preset value. If not, the second device changes position and repeats the above steps until the pixel average difference value of the images shot at some position reaches the first preset value, at which point it sends a coordinate recording instruction to the first device so that the first device records the corresponding position coordinate; it then continues adjusting its position and repeating the steps until the pixel average difference value of the images shot at another position reaches the second preset value, and the position coordinates of the two positions are obtained from the first device as the first position coordinate and the second position coordinate.
The method for determining the viewpoint width according to the embodiments of the present application has been described above from the perspective of the first device with reference to fig. 1; it is described below from the perspective of the second device with reference to fig. 4.
Referring to fig. 4 in combination, fig. 4 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application, including:
401. the second device shoots the target image displayed by the first device in the stereoscopic mode in real time to obtain a first group of images and a second group of images.
In this embodiment, when the width of the viewpoint corresponding to the first device needs to be determined, the second device shoots the target image displayed by the first device in the stereoscopic mode in real time to obtain a first group of images and a second group of images. The first group of images and the second group of images are obtained by the second device shooting the target image at different positions through the two cameras arranged on it; the two cameras of the second device are on the same horizontal line, and their center-to-center distance is a preset distance, for example 65 mm. The second device is any terminal device with an image acquisition function and a communication function, and the target image is a half-black, half-color image, where "color" refers to a visible color such as white, red, green or yellow. That is, when the width of the viewpoint corresponding to the first device (i.e. the width of the viewpoint corresponding to the 3D film covering the screen of the first device) needs to be determined, the first device displays the half-black, half-color image in 3D mode, and the second device then shoots the first device at different positions, obtaining the first group of images and the second group of images.
402. The second device divides two images in the first group of images and two images in the second group of images respectively to obtain a first area, a second area, a third area and a fourth area.
In this embodiment, after the second device obtains the first group of images and the second group of images, it may segment the two images in the first group of images to obtain the first area and the second area, and segment the two images in the second group of images to obtain the third area and the fourth area. Because the target image is captured by the two cameras arranged on the second device, the image captured by a single camera may include content other than the target image; the purpose of the segmentation is to ensure that the first area, the second area, the third area and the fourth area contain only the target image and no other content.
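A minimal sketch of that segmentation step, under the simplifying assumption that the bounding box of the first device's screen area in each camera view is already known (for example, from a one-time calibration); the patent itself does not prescribe how the region is located:

    def segment_screen_region(frame, bbox):
        """Crop one captured frame to the screen area showing the target image.

        frame: an (H, W, 3) image array from one of the two cameras.
        bbox:  (x, y, w, h) of the first device's screen area in this
               camera's view; assuming it is known is the hypothetical part.
        """
        x, y, w, h = bbox
        return frame[y:y + h, x:x + w]  # keeps only the target image region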
403. The second device calculates a first pixel average difference value of the first area and the second area and a second pixel average difference value of the third area and the fourth area, respectively.
In this embodiment, after segmenting the first group of images and the second group of images to obtain the first area, the second area, the third area and the fourth area, the second device may calculate the first pixel average difference value of the first area and the second area, and the second pixel average difference value of the third area and the fourth area, respectively. Specifically, the second device may calculate the first pixel average difference value and the second pixel average difference value by the following formula:
aver_pixel = ( Σ_{(i, j) ∈ A} | Al(i, j) - Ar(i, j) | ) / (w * h);
wherein aver_pixel is the first pixel average difference value or the second pixel average difference value, A is the screen area, w is the width of the screen area, h is the height of the screen area, and Al and Ar are the pair of areas being compared.
It will be appreciated that when the second device captures the target image to obtain the first group of images and the second group of images, the images may include not only the target image but also other content, so what needs to be determined is the pixel average difference value of the screen area, i.e. the display area of the target image on the screen of the first device, rather than the pixel average difference value of the whole images. In addition, while displaying the target image, the first device may display the position coordinate of the second device in real time on its screen, or directly display the position coordinate of the camera of the second device.
404. If the first pixel average difference value reaches a first preset value, the second device sends a first coordinate recording instruction to the first device, so that the first device records the first position coordinate.
In this embodiment, after the second device analyzes the first group of images to obtain the first pixel average difference value, it may determine whether that value reaches the first preset value; if so, the second device sends a first coordinate recording instruction to the first device, so that the first device records the first position coordinate according to the instruction, the first position coordinate being the coordinate of the position where the second device is located when shooting the first group of images. That is, the first device displays the position coordinate of the second device in real time; when the second device determines that the first pixel average difference value reaches the first preset value, it sends the instruction, and after receiving it the first device records the position coordinate of the second device at the position where the first group of images was captured.
It can be understood that, when the first pixel average difference value does not reach the first preset value, the second device changes position and shoots the first device again to obtain images captured at the new position, and analyzes them, until the pixel average difference value of the images captured after a position change reaches the first preset value, at which point it sends a coordinate recording instruction to the first device so that the first device records the position coordinate of that position.
405. If the second pixel average difference value reaches a second preset value, the second device sends a second coordinate recording instruction to the first device, so that the first device records the second position coordinate, and the width of the viewpoint corresponding to the first device is determined according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device.
In this embodiment, after the second device analyzes the second group of images to obtain the second pixel average difference value, it may determine whether that value reaches the second preset value; if so, the second device sends a second coordinate recording instruction to the first device, so that the first device records the second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device, the second position coordinate being the coordinate of the position where the second device is located when shooting the second group of images. That is, the first device displays the position coordinate of the second device in real time; when the second device determines that the second pixel average difference value reaches the second preset value, it sends the instruction, and after receiving it the first device records the position coordinate of that position and calculates the width of the viewpoint from the two position coordinates and the fitting angle.
It can be understood that, when the second pixel average difference value does not reach the second preset value, the second device changes position and shoots the first device again to obtain images captured at the new position, and analyzes them, until the pixel average difference value of the images captured after a position change reaches the second preset value, at which point it sends a coordinate recording instruction to the first device so that the first device records the position coordinate of that position.
It should be noted that, after the second device shoots the first device at a position and obtains the images captured there, it directly analyzes them to obtain the corresponding pixel average difference value and determines whether it reaches the first preset value or the second preset value. If not, the second device changes position and repeats the above steps until the pixel average difference value of the images captured at some position reaches the first preset value, at which point it sends a coordinate recording instruction to the first device so that the first device records the corresponding position coordinate; it then continues adjusting its position and repeating the steps until the pixel average difference value of the images captured at another position reaches the second preset value, sends the coordinate recording instruction to the first device so that the first device records the coordinate of that position, and the width of the viewpoint is calculated from the coordinates of the two positions and the fitting angle.
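The back-and-forth just described is a measure-and-move loop. The sketch below expresses it in Python under two stated assumptions: "reaches" is read as meeting or exceeding the preset value, and the three helpers (capture_pair, segment_screen_region, report_coordinate) are hypothetical stand-ins for the simultaneous two-camera capture, the segmentation of step 402, and the coordinate recording instruction sent to the first device:

    import numpy as np

    def find_two_positions(capture_pair, segment_screen_region,
                           report_coordinate, first_preset, second_preset):
        """Drive the capture loop until both preset values have been reached.

        capture_pair():           returns the two frames shot simultaneously
                                  by the second device's two cameras.
        segment_screen_region(f): crops a frame to the screen area.
        report_coordinate():      sends the coordinate recording instruction
                                  and returns the coordinate recorded by the
                                  first device.
        """
        presets = [first_preset, second_preset]
        coordinates = []
        while presets:
            left, right = capture_pair()  # shoot at the current position
            region_l = segment_screen_region(left)
            region_r = segment_screen_region(right)
            diff = np.abs(region_l.astype(np.int32)
                          - region_r.astype(np.int32)).mean()
            if diff >= presets[0]:  # the preset value has been reached
                coordinates.append(report_coordinate())
                presets.pop(0)
            # otherwise the second device is moved and the loop repeats
        return coordinates  # first and second position coordinates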
To sum up, in the embodiments provided by the present application, when determining the viewpoint width of the first device, the second device shoots the first device at different positions through its two cameras to obtain multiple groups of images, analyzes them to obtain the pixel average difference value between the two areas corresponding to the images shot by the two cameras at the same time, and, when that value reaches a preset value, sends a coordinate recording instruction to the first device, so that the first device records the position coordinate of the corresponding position according to the instruction and calculates the width of the viewpoint corresponding to the first device according to the position coordinates and the fitting angle of the grating corresponding to the first device. The width of the viewpoint corresponding to the first device can thus be determined quickly without knowing the screen optical parameters of the first device, and the 3D image or 3D video displayed by the first device can be adjusted according to the viewpoint width, improving the viewing experience of the user.
The method for determining the viewpoint width provided in the embodiments of the present application has been described above from the perspectives of the first device and the second device respectively; it is described below from the perspective of the interaction between the first device and the second device with reference to fig. 5.
Referring to fig. 5, fig. 5 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application, including:
501. the first device displays the target image in a 3D mode.
502. The second device shoots, in real time, the target image displayed by the first device in the 3D mode to obtain a first group of images and a second group of images.
503. The second device divides two images in the first group of images and two images in the second group of images respectively to obtain a first area, a second area, a third area and a fourth area.
505. The second device calculates a first pixel average difference value of the first region and the second region and a second pixel average difference value of the third region and the fourth region respectively.
It will be appreciated that steps 501 to 503 are similar to steps 401 to 404 in fig. 4, and are already described in detail in fig. 4, and are not repeated here.
506. If the first pixel average difference value reaches a first preset value, the second device sends a first coordinate recording instruction to the first device.
507. The first device records the first position coordinates according to the first coordinate recording instruction.
508. If the second pixel average difference value reaches a second preset value, the second device sends a second coordinate recording instruction to the first device.
509. The first device records the second position coordinates according to the second coordinate recording instruction.
It will be appreciated that steps 505 to 509 are similar to the steps of recording position coordinates in fig. 1 and 4, and are already described in detail in fig. 1 and 4, and are not described in detail here.
510. The first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device.
It is to be understood that step 510 is similar to step 104 in fig. 1, and is described in detail in fig. 1, and is not described in detail herein.
To sum up, the same conclusion as in the preceding embodiments holds for this interaction flow: the width of the viewpoint corresponding to the first device can be determined quickly without knowing the screen optical parameters of the first device, and the 3D image or 3D video displayed by the first device can then be adjusted according to the viewpoint width, improving the viewing experience of the user.
It should be noted that, in each of the embodiments described above, the first device calculates the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating, by way of example. Alternatively, after recording the first position coordinate and the second position coordinate, the first device may send them to the second device, and the second device may calculate the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the fitting angle of the grating corresponding to the first device, and then send the width of the viewpoint back to the first device. In addition, the analysis of the first group of images and the second group of images to obtain the pixel average difference values may also be performed by the first device, which is not limited here.
It should be noted that the pixel average difference calculation, the position calculation and the viewpoint width calculation have already been described in detail in fig. 1 to fig. 5; they are the same here except that the executing entity differs, so the detailed description is omitted.
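For illustration only, and not as part of the claimed embodiments, the following Python sketch outlines the flow summarized above. The helper names capture_pair, split_regions, average_pixel_difference and current_position are hypothetical placeholders for the shooting, segmentation, comparison and positioning steps detailed in fig. 1 to fig. 5:

```python
import math

def measure_viewpoint_width(first_threshold, second_threshold, attach_angle_rad,
                            capture_pair, split_regions, average_pixel_difference,
                            current_position, y_const=0.0):
    """Sketch of the measurement flow: move the second device, watch the pixel
    average difference value, record two position coordinates, compute the width."""
    first_xy = None
    second_xy = None
    while second_xy is None:
        left_img, right_img = capture_pair()           # two cameras, same instant
        region_l, region_r = split_regions(left_img, right_img)
        diff = average_pixel_difference(region_l, region_r)
        if first_xy is None and diff >= first_threshold:
            first_xy = current_position()              # "first coordinate recording instruction"
        elif first_xy is not None and diff >= second_threshold:
            second_xy = current_position()             # "second coordinate recording instruction"
    (x0, y0), (x1, y1) = first_xy, second_xy
    x0_proj = x0 + (y0 - y_const) * math.tan(attach_angle_rad)  # first position X0'
    x1_proj = x1 + (y1 - y_const) * math.tan(attach_angle_rad)  # second position X1'
    return abs(x0_proj - x1_proj)                      # viewpoint width VW
```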
The embodiments of the present application have been described above from the perspective of the viewpoint width determination method; they are described below from the perspective of the viewpoint width determination device.
Referring to fig. 6, fig. 6 is a schematic virtual structure diagram of a first device according to an embodiment of the present application, where the first device 600 includes:
a display unit 601, configured to display a target image in a stereoscopic mode, so that a second device shoots, in real time, the target image displayed by the first device in the stereoscopic mode to obtain a first group of images and a second group of images, divides the two images in the first group of images and the two images in the second group of images respectively to obtain a first region, a second region, a third region and a fourth region, and calculates a first pixel average difference value between the first region and the second region and a second pixel average difference value between the third region and the fourth region, where the first group of images and the second group of images are obtained by the two cameras arranged on the second device shooting the target image simultaneously, with the second device located at different positions, the two cameras of the second device are on the same horizontal line with a preset center-to-center distance, the first region and the second region correspond to the two images in the first group of images, and the third region and the fourth region correspond to the two images in the second group of images;
a recording unit 602, configured to record a first position coordinate of the second device when it shoots the first group of images, if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference value reaches a first preset value; and to record a second position coordinate of the second device when it shoots the second group of images, if the first device receives a second coordinate recording instruction sent when the second pixel average difference value reaches a second preset value;
and a determining unit 603, configured to determine the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first device.
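For illustration only, a minimal Python sketch of how units 601 to 603 could be organized; the class name, the instruction handlers and the assumption that the recorded coordinates arrive together with the instructions are all hypothetical:

```python
import math

class FirstDevice:
    """Illustrative stand-in for the first device 600 (units 601 to 603)."""

    def __init__(self, attach_angle_rad: float, y_const: float = 0.0):
        self.attach_angle = attach_angle_rad   # attaching angle a of the grating
        self.y_const = y_const                 # preset constant y
        self.first_xy = None                   # first position coordinate (x0, y0)
        self.second_xy = None                  # second position coordinate (x1, y1)

    def on_first_instruction(self, xy):        # recording unit 602
        self.first_xy = xy

    def on_second_instruction(self, xy):
        self.second_xy = xy

    def viewpoint_width(self) -> float:        # determining unit 603
        (x0, y0), (x1, y1) = self.first_xy, self.second_xy
        x0_proj = x0 + (y0 - self.y_const) * math.tan(self.attach_angle)
        x1_proj = x1 + (y1 - self.y_const) * math.tan(self.attach_angle)
        return abs(x0_proj - x1_proj)
```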
Referring to fig. 7, fig. 7 is a schematic diagram of a virtual structure of a second device according to an embodiment of the present application, where the second device 700 includes:
a shooting unit 701, configured to shoot, in real time, a target image displayed by a first device in a stereoscopic mode to obtain a first group of images and a second group of images, where the first group of images and the second group of images are obtained by the two cameras arranged on the second device shooting the target image simultaneously, with the second device located at different positions, the two cameras of the second device are on the same horizontal line, and the center-to-center distance between the two cameras of the second device is a preset distance;
a segmentation unit 702, configured to segment the two images in the first group of images and the two images in the second group of images respectively, to obtain a first region, a second region, a third region and a fourth region, where the first region and the second region correspond to the two images in the first group of images, and the third region and the fourth region correspond to the two images in the second group of images;
a calculating unit 703, configured to calculate a first pixel average difference value of the first region and the second region and a second pixel average difference value of the third region and the fourth region, respectively;
a transceiver unit 704, configured to send a first coordinate recording instruction to the first device if the first pixel average difference value reaches a first preset value, so that the first device records a first position coordinate, where the first position coordinate is the coordinate of the position of the second device when it shoots the first group of images; and to send a second coordinate recording instruction to the first device if the second pixel average difference value reaches a second preset value, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first device, where the second position coordinate is the coordinate of the position of the second device when it shoots the second group of images.
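Again for illustration only, a minimal Python sketch of units 701 to 704; the send_instruction callback and the choice to compare full frames in place of the segmented regions are assumptions, since the segmentation rule itself is detailed in fig. 1 to fig. 5:

```python
import numpy as np

class SecondDevice:
    """Illustrative stand-in for the second device 700 (units 701 to 704)."""

    def __init__(self, first_threshold: float, second_threshold: float, send_instruction):
        self.t1 = first_threshold
        self.t2 = second_threshold
        self.send = send_instruction        # assumed callback standing in for unit 704
        self.first_sent = False

    @staticmethod
    def segment(left_img: np.ndarray, right_img: np.ndarray):
        """Segmentation unit 702: pick the corresponding regions of the two
        images; the full frames are used here as a placeholder."""
        return left_img, right_img

    @staticmethod
    def pixel_average_difference(a_l: np.ndarray, a_r: np.ndarray) -> float:
        """Calculation unit 703: mean absolute per-pixel difference."""
        return float(np.mean(np.abs(a_l.astype(np.int32) - a_r.astype(np.int32))))

    def process_pair(self, left_img: np.ndarray, right_img: np.ndarray):
        """Shooting unit 701 would deliver one synchronized image pair here."""
        a_l, a_r = self.segment(left_img, right_img)
        diff = self.pixel_average_difference(a_l, a_r)
        if not self.first_sent and diff >= self.t1:
            self.first_sent = True
            self.send("first_coordinate_recording")
        elif self.first_sent and diff >= self.t2:
            self.send("second_coordinate_recording")
```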
Next, another viewpoint width determining apparatus provided in the embodiments of the present application is described. The apparatus may be a terminal device, namely the first device or the second device described above. Referring to fig. 8, the terminal device 800 includes:
a receiver 801, a transmitter 802, a processor 803 and a memory 804 (where the number of processors 803 in the terminal device 800 may be one or more, one processor being an example in fig. 8). In some embodiments of the present application, the receiver 801, transmitter 802, processor 803, and memory 804 may be connected by a bus or other means, with the bus connection being exemplified in fig. 8.
Memory 804 may include read only memory and random access memory and provides instructions and data to the processor 803. A portion of the memory 804 may also include NVRAM. The memory 804 stores an operating system and operating instructions, executable modules or data structures, or a subset thereof, or an extended set thereof, where the operating instructions may include various operating instructions for performing various operations. The operating system may include various system programs for implementing various underlying services and handling hardware-based tasks.
The processor 803 controls the operation of the terminal device and may also be referred to as a CPU (central processing unit). In a specific application, the individual components of the terminal device are coupled together by a bus system, which may comprise, in addition to a data bus, a power bus, a control bus, a status signal bus and the like. For clarity of illustration, however, the various buses are referred to in the figures as the bus system.
The methods disclosed in the embodiments of the present application may be applied to, or implemented by, the processor 803. The processor 803 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 803 or by instructions in the form of software. The processor 803 may be a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 804, and the processor 803 reads the information in the memory 804 and completes the steps of the above method in combination with its hardware.
In the embodiment of the present application, the processor 803 is configured to perform the operations performed by the first device and the second device.
The embodiments of the present application further provide a computer-readable medium containing computer-executable instructions which enable a server to perform the viewpoint width determination method described in the foregoing embodiments; the implementation principles and technical effects are similar and are not repeated here.
It should be further noted that the above-described apparatus embodiments are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the apparatus embodiments provided by the application, the connection relationship between modules indicates that they have a communication connection, which may be specifically implemented as one or more communication buses or signal lines.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by software plus the necessary general-purpose hardware, or by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components and the like. Generally, functions performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function can vary: analog circuits, digital circuits or dedicated circuits. For the present application, however, a software program implementation is the preferred embodiment in most cases. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may essentially be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk or magnetic tape), an optical medium (e.g., a DVD) or a semiconductor medium (e.g., a solid state disk (SSD)).
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (8)

1. A method for determining a width of a viewpoint, comprising:
the method comprises the steps that a first device displays a target image in a stereoscopic mode, so that a second device shoots, in real time, the target image displayed by the first device in the stereoscopic mode to obtain a first group of images and a second group of images, divides the two images in the first group of images and the two images in the second group of images respectively to obtain a first region, a second region, a third region and a fourth region, and calculates a first pixel average difference value of the first region and the second region and a second pixel average difference value of the third region and the fourth region, wherein the first group of images and the second group of images are obtained by the two cameras arranged on the second device shooting the target image simultaneously, with the second device located at different positions, the two cameras of the second device are on the same horizontal line, the center-to-center distance of the two cameras of the second device is a preset distance, the first region and the second region correspond to the two images in the first group of images, and the third region and the fourth region correspond to the two images in the second group of images;
if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference value reaches a first preset value, the first device records a first position coordinate of the second device when it shoots the first group of images;
if the first device receives a second coordinate recording instruction sent when the second pixel average difference value reaches a second preset value, the first device records a second position coordinate of the second device when it shoots the second group of images;
the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first device, wherein the first device determining a first position of the second device according to the attaching angle and the first position coordinate comprises:
the first device calculates the first position by the following formula:
X0′ = x0 + (y0 - y) * tan(a);
wherein X0′ is the first position, the first position coordinate is (x0, y0), y is a preset constant, and a is the attaching angle;
the first device determining the second position of the second device according to the fitting angle and the second position coordinates includes:
The first device calculates the second location by the following formula:
X 1 ′=x 1 +(y 1 -y)*tan(a);
wherein X is 1 ' is the second position, the second position coordinates are (x 1 ,y 1 );
the first device determining the width of the viewpoint according to the first position and the second position comprises:
the first device calculates the width of the viewpoint by the following formula:
VW = abs(X0′ - X1′);
wherein VW is the width of the viewpoint and abs is the absolute value function.
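As a worked check of the three formulas in claim 1 (all input values arbitrary and for illustration only):

```python
import math

x0, y0 = 10.0, 50.0            # first position coordinate (x0, y0), arbitrary
x1, y1 = 25.0, 50.0            # second position coordinate (x1, y1), arbitrary
y = 0.0                        # preset constant y (assumed)
a = math.radians(8.0)          # attaching angle a (assumed)

X0 = x0 + (y0 - y) * math.tan(a)   # X0' = x0 + (y0 - y) * tan(a)
X1 = x1 + (y1 - y) * math.tan(a)   # X1' = x1 + (y1 - y) * tan(a)
VW = abs(X0 - X1)                  # VW = abs(X0' - X1')
print(VW)                          # ~15.0: the tan terms cancel because y0 == y1
```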
2. The method of claim 1, wherein the first device determining the width of the viewpoint corresponding to the first device according to the first position coordinate and the second position coordinate comprises:
the first device determines a first position of the second device according to the attaching angle and the first position coordinate, wherein the first position is the position of the second device when the average difference value of the first pixels reaches the first preset value;
the first device determines a second position of the second device according to the attaching angle and the second position coordinate, wherein the second position is the position of the second device when the second pixel average difference value reaches the second preset value;
and determining the width of the viewpoint according to the first position and the second position.
3. The method according to any one of claims 1 to 2, further comprising:
acquiring the width of the grating;
determining the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoints;
and adjusting the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and the change of the position of the user's eyes.
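Claim 3 leaves the layout rule itself open; the following sketch shows one plausible reading, in which the number of viewpoints under one grating period is the ratio of the two widths and the active viewpoint follows the horizontal eye offset. Both rules are assumptions for illustration, not claimed specifics:

```python
def viewpoints_per_grating(grating_width: float, viewpoint_width: float) -> int:
    """Assumed layout rule: how many viewpoints fit under one grating period."""
    return max(1, round(grating_width / viewpoint_width))

def active_viewpoint(eye_x: float, viewpoint_width: float, n_views: int) -> int:
    """Assumed mapping from a horizontal eye offset to a viewpoint index."""
    return int(eye_x // viewpoint_width) % n_views
```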
4. A method for determining a viewpoint width, comprising:
the method comprises the steps that a second device shoots, in real time, a target image displayed by a first device in a stereoscopic mode to obtain a first group of images and a second group of images, wherein the first group of images and the second group of images are obtained by the two cameras arranged on the second device shooting the target image simultaneously, with the second device located at different positions, the two cameras of the second device are on the same horizontal line, and the center-to-center distance between the two cameras of the second device is a preset distance;
the second device divides two images in the first group of images and two images in the second group of images respectively to obtain a first area, a second area, a third area and a fourth area, wherein the first area and the second area correspond to the two images in the first group of images, and the third area and the fourth area correspond to the two images in the second group of images;
The second device calculates a first pixel average difference value of the first region and the second region and a second pixel average difference value of the third region and the fourth region respectively;
if the first pixel average difference value reaches a first preset value, the second device sends a first coordinate recording instruction to the first device, so that the first device records a first position coordinate, wherein the first position coordinate is the coordinate of the position of the second device when it shoots the first group of images;
if the second pixel average difference value reaches a second preset value, the second device sends a second coordinate recording instruction to the first device, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first device, wherein the second position coordinate is the coordinate of the position of the second device when it shoots the second group of images; and wherein the second device calculating the first pixel average difference value of the first region and the second region and the second pixel average difference value of the third region and the fourth region respectively comprises:
the second device calculates the first pixel average difference value by the following formula:
aver_piexl = (1 / (w * h)) * Σ_{i=1..w} Σ_{j=1..h} | Al(i,j) - Ar(i,j) |;
wherein aver_piexl is the first pixel average difference value, w is the width of the first region, h is the height of the first region, Al is the first region, Ar is the second region, and the first region and the second region have the same width and height;
the second device calculates the second pixel average difference value by the same formula:
aver_piexl = (1 / (w * h)) * Σ_{i=1..w} Σ_{j=1..h} | Al(i,j) - Ar(i,j) |;
wherein aver_piexl is the second pixel average difference value, w is the width of the third region, h is the height of the third region, Al is the third region, Ar is the fourth region, and the third region and the fourth region have the same width and height.
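A direct NumPy transcription of the formula above, assuming Al and Ar are equally sized single-channel arrays (the identifier aver_piexl is spelled as in the claims):

```python
import numpy as np

def aver_piexl(Al: np.ndarray, Ar: np.ndarray) -> float:
    """Pixel average difference value of two equally sized regions."""
    assert Al.shape == Ar.shape            # widths and heights are the same
    h, w = Al.shape                        # h rows (height), w columns (width)
    total = np.sum(np.abs(Al.astype(np.int64) - Ar.astype(np.int64)))
    return float(total) / (w * h)
```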
5. A device applying the viewpoint width determination method according to claim 4, the device being a first device, comprising:
the display unit is used for displaying a target image in a stereoscopic mode, so that a second device shoots, in real time, the target image displayed by the first device in the stereoscopic mode to obtain a first group of images and a second group of images, divides the two images in the first group of images and the two images in the second group of images respectively to obtain a first region, a second region, a third region and a fourth region, and calculates a first pixel average difference value between the first region and the second region and a second pixel average difference value between the third region and the fourth region, wherein the first group of images and the second group of images are obtained by the two cameras arranged on the second device shooting the target image simultaneously, with the second device located at different positions, the two cameras of the second device are on the same horizontal line with a preset center-to-center distance, the first region and the second region correspond to the two images in the first group of images, and the third region and the fourth region correspond to the two images in the second group of images;
the recording unit is used for recording a first position coordinate of the second device when it shoots the first group of images if the first device receives a first coordinate recording instruction sent by the second device when the first pixel average difference value reaches a first preset value, and for recording a second position coordinate of the second device when it shoots the second group of images if the first device receives a second coordinate recording instruction sent when the second pixel average difference value reaches a second preset value;
and the determining unit is used for determining the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first device.
6. A device applying the viewpoint width determination method according to claim 4, the device being a second device, comprising:
the shooting unit is used for shooting a target image displayed by a first device in a stereoscopic mode in real time to obtain a first group of images and a second group of images, wherein the first group of images and the second group of images are obtained by shooting the target image at different positions by two cameras arranged on the second device at the same time, the two cameras of the second device are positioned on the same horizontal line, and the center distance between the two cameras of the second device is a preset distance;
The segmentation unit is used for respectively segmenting two images in the first group of images and two images in the second group of images to obtain a first region, a second region, a third region and a fourth region, wherein the first region and the second region correspond to the two images in the first group of images, and the third region and the fourth region correspond to the two images in the second group of images;
a calculating unit, configured to calculate a first pixel average difference value of the first region and the second region and a second pixel average difference value of the third region and the fourth region, respectively;
the receiving and transmitting unit is used for sending a first coordinate recording instruction to the first device if the first pixel average difference value reaches a first preset value, so that the first device records a first position coordinate, wherein the first position coordinate is the coordinate of the position of the second device when it shoots the first group of images; and for sending a second coordinate recording instruction to the first device if the second pixel average difference value reaches a second preset value, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first device, wherein the second position coordinate is the coordinate of the position of the second device when it shoots the second group of images.
7. A computer device, comprising:
at least one processor, a memory and a transceiver connected to one another, wherein the memory is configured to store program code, and the processor is configured to invoke the program code in the memory to perform the method for determining the viewpoint width according to any one of claims 1 to 3 or claim 4.
8. A computer storage medium, comprising:
instructions which, when executed on a computer, cause the computer to perform the method for determining the viewpoint width according to any one of claims 1 to 3 or claim 4.
CN202111049335.9A 2021-09-08 2021-09-08 Viewpoint width determining method, device and storage medium Active CN113781560B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111049335.9A CN113781560B (en) 2021-09-08 2021-09-08 Viewpoint width determining method, device and storage medium
PCT/CN2022/117710 WO2023036218A1 (en) 2021-09-08 2022-09-08 Method and apparatus for determining width of viewpoint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111049335.9A CN113781560B (en) 2021-09-08 2021-09-08 Viewpoint width determining method, device and storage medium

Publications (2)

Publication Number Publication Date
CN113781560A CN113781560A (en) 2021-12-10
CN113781560B true CN113781560B (en) 2023-12-22

Family

ID=78841632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111049335.9A Active CN113781560B (en) 2021-09-08 2021-09-08 Viewpoint width determining method, device and storage medium

Country Status (2)

Country Link
CN (1) CN113781560B (en)
WO (1) WO2023036218A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781560B (en) * 2021-09-08 2023-12-22 未来科技(襄阳)有限公司 Viewpoint width determining method, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0707288A2 (en) * 1994-10-14 1996-04-17 Canon Kabushiki Kaisha Image processing method and apparatus
JP2019057908A (en) * 2017-09-20 2019-04-11 キヤノン株式会社 Imaging apparatus and control method thereof
CN110599602A (en) * 2019-09-19 2019-12-20 百度在线网络技术(北京)有限公司 AR model training method and device, electronic equipment and storage medium
CN112731343A (en) * 2020-12-18 2021-04-30 福建汇川物联网技术科技股份有限公司 Target measuring method and device of measuring camera
CN113286084A (en) * 2021-05-21 2021-08-20 展讯通信(上海)有限公司 Terminal image acquisition method and device, storage medium and terminal

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5978695B2 (en) * 2011-05-27 2016-08-24 株式会社Jvcケンウッド Autostereoscopic display device and viewpoint adjustment method
CN108259888A (en) * 2016-12-29 2018-07-06 深圳超多维光电子有限公司 The test method and system of stereo display effect
CN107885325B (en) * 2017-10-23 2020-12-08 张家港康得新光电材料有限公司 Naked eye 3D display method and control system based on human eye tracking
CN108683906B (en) * 2018-05-29 2021-04-20 张家港康得新光电材料有限公司 Naked eye 3D display parameter testing method, device, equipment and medium
CN110139095B (en) * 2019-05-14 2021-04-06 深圳市新致维科技有限公司 Naked eye 3D display module detection method and system and readable storage medium
CN113763472B (en) * 2021-09-08 2024-03-29 未来科技(襄阳)有限公司 Viewpoint width determining method and device and storage medium
CN113781560B (en) * 2021-09-08 2023-12-22 未来科技(襄阳)有限公司 Viewpoint width determining method, device and storage medium

Also Published As

Publication number Publication date
CN113781560A (en) 2021-12-10
WO2023036218A1 (en) 2023-03-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant