CN113763472B - Viewpoint width determining method and device and storage medium - Google Patents

Viewpoint width determining method and device and storage medium

Info

Publication number
CN113763472B
CN113763472B
Authority
CN
China
Prior art keywords
image
pixel mean
mean value
coordinate
width
Prior art date
Legal status
Active
Application number
CN202111048861.3A
Other languages
Chinese (zh)
Other versions
CN113763472A (en)
Inventor
贺曙
徐万良
Current Assignee
Future Technology Xiang Yang Co ltd
Original Assignee
Future Technology Xiang Yang Co ltd
Priority date
Filing date
Publication date
Application filed by Future Technology Xiang Yang Co ltd filed Critical Future Technology Xiang Yang Co ltd
Priority to CN202111048861.3A priority Critical patent/CN113763472B/en
Publication of CN113763472A publication Critical patent/CN113763472A/en
Application granted granted Critical
Publication of CN113763472B publication Critical patent/CN113763472B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The application provides a viewpoint width determining method and related equipment, by which the width of the viewpoint corresponding to a device can be determined quickly even when the screen optical parameters of the device are unknown. The method comprises the following steps: a first device displays a target image in a stereoscopic mode, so that a second device photographs the target image in real time to obtain a first image and a second image and analyzes them to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device, the first image and the second image being captured by the second device at different positions; if the first device receives a first coordinate-recording instruction, it records a first position coordinate; if the first device receives a second coordinate-recording instruction, it records a second position coordinate; and the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device.

Description

Viewpoint width determining method and device and storage medium
[ Technical Field ]
The application belongs to the field of naked-eye 3D and particularly relates to a viewpoint width determining method and device and a storage medium.
[ Background Art ]
Naked-eye 3D, also known as autostereoscopy, is a generic term for technologies that achieve a stereoscopic effect without external aids such as polarized glasses.
In a naked-eye 3D system with human-eye tracking, the device captures images with a front camera to track the positions of the viewer's eyes and then computes the viewpoint corresponding to the current eye position. This process requires the width of each viewpoint in the system to be known.
At present, the viewpoint width is mainly derived from the optical design. In practice, however, gratings are often applied to third-party devices, and the screen optical parameters of such a device, such as the glass thickness, the optical-adhesive thickness, and the assembly-gap size, cannot be obtained exactly, so the viewpoint width cannot be determined accurately.
[ Summary of the Invention ]
The purpose of the application is to provide a viewpoint width determining method, device, and storage medium that can quickly determine the width of the viewpoint corresponding to a device even when the screen optical parameters of the device are unknown, so that the 3D image or 3D video displayed by the device can be adjusted according to the viewpoint width, improving the user's viewing experience.
A first aspect of the embodiments of the present application provides a method for determining a viewpoint width, including:
a first device displays a target image in a stereoscopic mode, so that a second device photographs the target image in real time to obtain a first image and a second image and analyzes the first image and the second image to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, where the first image and the second image are captured by the second device at different positions, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
if the first device receives a first coordinate-recording instruction sent by the second device when the first pixel mean value reaches a first preset value, the first device records a first position coordinate of the position where the second device was located when capturing the first image;
if the first device receives a second coordinate-recording instruction sent by the second device when the second pixel mean value reaches a second preset value, the first device records a second position coordinate of the position where the second device was located when capturing the second image;
and the first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device.
In one possible design, the determining, by the first device, of the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device includes:
the first device determines a first position of the second device according to the attachment angle and the first position coordinate, where the first position is the position of the second device when the first pixel mean value reaches the first preset value;
the first device determines a second position of the second device according to the attachment angle and the second position coordinate, where the second position is the position of the second device when the second pixel mean value reaches the second preset value;
the first device determines the width of the viewpoint according to the first position and the second position.
In one possible design, the first device determining the first position of the second device according to the attachment angle and the first position coordinate includes:
the first device calculates the first position by the following formula:
X0′ = x0 + (y0 - y) * tan(a);
where X0′ is the first position, the first position coordinate is (x0, y0), y is a preset constant, and a is the attachment angle;
the first device determining the second position of the second device according to the attachment angle and the second position coordinate includes:
the first device calculates the second position by the following formula:
X1′ = x1 + (y1 - y) * tan(a);
where X1′ is the second position and the second position coordinate is (x1, y1);
the first device determining the width of the viewpoint according to the first position and the second position includes:
the first device calculates the width of the viewpoint by the following formula:
VW = abs(X0′ - X1′);
where VW is the width of the viewpoint and abs is the absolute-value function.
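The three formulas above combine into a short computation. The following is a minimal sketch in Python; the function and parameter names are illustrative rather than part of the claimed method, and the attachment angle is assumed to be given in degrees:

```python
import math

def viewpoint_width(p0, p1, y_ref, angle_deg):
    """Viewpoint width from two recorded positions of the second device.

    p0, p1    : (x, y) coordinates recorded at the two preset pixel mean values
    y_ref     : the preset constant y (e.g. half the width of the screen area)
    angle_deg : attachment angle a of the grating, in degrees
    """
    a = math.radians(angle_deg)
    x0p = p0[0] + (p0[1] - y_ref) * math.tan(a)  # X0' = x0 + (y0 - y) * tan(a)
    x1p = p1[0] + (p1[1] - y_ref) * math.tan(a)  # X1' = x1 + (y1 - y) * tan(a)
    return abs(x0p - x1p)                        # VW = abs(X0' - X1')
```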
In one possible design, the method further comprises:
acquiring the width of the grating;
determining the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoints;
and adjusting the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and changes in the position of the user's eyes, as sketched below.
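As a rough illustration of how such a layout might be derived, the following hypothetical sketch assumes, as is common in lenticular designs, that one grating period spans a whole number of viewpoints and that the viewpoints repeat periodically across the viewing zone; neither the names nor the modulo mapping are specified by the application:

```python
def views_per_period(grating_width, view_width):
    """Number of viewpoints assumed to span one grating period."""
    return round(grating_width / view_width)

def view_index(eye_x, view_width, num_views):
    """Hypothetical mapping from a tracked eye x-position to a view index,
    assuming the layout repeats with period num_views * view_width."""
    return int(eye_x // view_width) % num_views
```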
A second aspect of the embodiments of the present application provides a method for determining a viewpoint width, including:
a second device photographs, in real time, a target image displayed by a first device in a stereoscopic mode to obtain a first image and a second image, where the first image and the second image are captured by the second device at different positions;
the second device analyzes the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, where the first pixel mean value corresponds to the first image and the second pixel mean value corresponds to the second image;
if the first pixel mean value reaches a first preset value, the second device sends a first coordinate-recording instruction to the first device, so that the first device records a first position coordinate, the first position coordinate being the coordinate of the position of the second device when the first image was captured;
if the second pixel mean value reaches a second preset value, the second device sends a second coordinate-recording instruction to the first device, so that the first device records a second position coordinate and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device, the second position coordinate being the coordinate of the position of the second device when the second image was captured.
In one possible design, the second device analyzing the first image and the second image respectively to obtain the first pixel mean value and the second pixel mean value of the screen area corresponding to the first device includes:
the second device calculates the first pixel mean value and the second pixel mean value by the following formula:
aver_pixl = ( Σ_{(i,j)∈A} pixel(i, j) ) / (w * h);
where aver_pixl is the first pixel mean value or the second pixel mean value, A is the screen area, pixel(i, j) is the value of the pixel at position (i, j) within A, w is the width of the screen area, and h is the height of the screen area.
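A minimal sketch of this averaging, assuming the captured frame is available as a 2-D grayscale NumPy array and the screen area has already been located as a rectangle; the names are illustrative assumptions:

```python
import numpy as np

def screen_pixel_mean(frame: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    """Mean pixel value over the screen area A, taken to be the w x h
    rectangle whose top-left corner is (x, y) in the captured frame."""
    region = frame[y:y + h, x:x + w]       # the screen area A
    return float(region.sum()) / (w * h)   # aver_pixl = sum over A / (w * h)
```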
A third aspect of the embodiments of the present application provides a device, the device being a first device, the first device including:
a display unit, configured to display a target image in a stereoscopic mode, so that a second device photographs the target image in real time to obtain a first image and a second image and analyzes them to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, where the first image and the second image are captured by the second device at different positions, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
a recording unit, configured to record a first position coordinate of the position where the second device was located when capturing the first image, if a first coordinate-recording instruction sent by the second device when the first pixel mean value reaches a first preset value is received;
the recording unit being further configured to record a second position coordinate of the position where the second device was located when capturing the second image, if a second coordinate-recording instruction sent by the second device when the second pixel mean value reaches a second preset value is received;
and a determining unit, configured to determine the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device.
In one possible design, the determining unit is specifically configured to:
determine a first position of the second device according to the attachment angle and the first position coordinate, where the first position is the position of the second device when the first pixel mean value reaches the first preset value;
determine a second position of the second device according to the attachment angle and the second position coordinate, where the second position is the position of the second device when the second pixel mean value reaches the second preset value;
and determine the width of the viewpoint according to the first position and the second position.
In one possible design, the determining unit determining the first position of the second device according to the attachment angle and the first position coordinate includes:
calculating the first position by the following formula:
X0′ = x0 + (y0 - y) * tan(a);
where X0′ is the first position, the first position coordinate is (x0, y0), y is a preset constant, and a is the attachment angle;
the determining unit determining the second position of the second device according to the attachment angle and the second position coordinate includes:
calculating the second position by the following formula:
X1′ = x1 + (y1 - y) * tan(a);
where X1′ is the second position and the second position coordinate is (x1, y1);
the determining unit determining the width of the viewpoint according to the first position and the second position includes:
calculating the width of the viewpoint by the following formula:
VW = abs(X0′ - X1′);
where VW is the width of the viewpoint and abs is the absolute-value function.
In one possible design, the determining unit is further configured to:
acquire the width of the grating;
determine the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoints;
and adjust the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and changes in the position of the user's eyes.
A fourth aspect of the embodiments of the present application provides a device, the device being a second device, the second device including:
a photographing unit, configured to photograph, in real time, a target image displayed by a first device in a stereoscopic mode to obtain a first image and a second image, where the first image and the second image are captured by the second device at different positions;
an analysis unit, configured to analyze the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, where the first pixel mean value corresponds to the first image and the second pixel mean value corresponds to the second image;
a first sending unit, configured to send a first coordinate-recording instruction to the first device if the first pixel mean value reaches a first preset value, so that the first device records a first position coordinate, the first position coordinate being the coordinate of the position of the second device when the first image was captured;
and a second sending unit, configured to send a second coordinate-recording instruction to the first device if the second pixel mean value reaches a second preset value, so that the first device records a second position coordinate and the width of the viewpoint corresponding to the first device is determined according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device, the second position coordinate being the coordinate of the position of the second device when the second image was captured.
In one possible design, the analysis unit is specifically configured to:
calculate the first pixel mean value and the second pixel mean value by the following formula:
aver_pixl = ( Σ_{(i,j)∈A} pixel(i, j) ) / (w * h);
where aver_pixl is the first pixel mean value or the second pixel mean value, A is the screen area, pixel(i, j) is the value of the pixel at position (i, j) within A, w is the width of the screen area, and h is the height of the screen area.
A fifth aspect of the embodiments of the present application provides a computer device, including at least one processor, a memory, and a transceiver connected to one another, where the memory is configured to store program code and the processor is configured to invoke the program code in the memory to perform the steps of the method for determining a viewpoint width described in the above aspects.
A sixth aspect of the embodiments of the present application provides a computer storage medium including instructions which, when run on a computer, cause the computer to perform the steps of the method for determining a viewpoint width described in the above aspects.
Compared with the related art, in the embodiments provided by the application, when the viewpoint width of the first device is determined, the second device can photograph the first device at different positions to obtain a plurality of images and analyze them to obtain the pixel mean value corresponding to each image. When a pixel mean value reaches a preset value, the first device records the position coordinate of the second device at that moment, and the first device then calculates the width of the viewpoint corresponding to the first device from the position coordinates of the second device at the different positions and the attachment angle of the grating. The width of the viewpoint corresponding to the first device can thus be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can then be adjusted according to the viewpoint width, improving the user's viewing experience.
[ Description of the Drawings ]
Fig. 1 is a schematic diagram of an embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
Fig. 2 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 3 is an application scenario schematic diagram of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 4 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 5 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application;
fig. 6 is a schematic virtual structure of a first device according to an embodiment of the present application;
fig. 7 is a schematic virtual structure of a second device according to an embodiment of the present application;
fig. 8 is a schematic hardware structure of a first device and a second device provided in an embodiment of the present application.
[ Detailed Description ]
The embodiments of the present application are described below clearly and fully with reference to the accompanying drawings. It is evident that the described embodiments are only some, rather than all, of the embodiments of the present application.
The terms "first", "second", and the like in the description, the claims, and the drawings of the present application are used to distinguish between similar objects and are not necessarily intended to describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises" and "comprising", and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus comprising a list of steps or modules is not necessarily limited to the steps or modules expressly listed but may include other steps or modules not expressly listed or inherent to such a process, method, article, or apparatus. The division of modules in the present application is only a logical division; in an actual implementation, a plurality of modules may be combined or integrated into another system, some features may be omitted or not implemented, and the couplings, direct couplings, or communication connections between the modules shown or discussed may be realized through interfaces, with indirect couplings or communication connections between modules being electrical or of another similar form, none of which limits the present application. The modules or sub-modules described as separate components may or may not be physically separate and may be distributed over a plurality of circuit modules; some or all of them may be selected according to actual needs to achieve the purposes of the solution of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of an embodiment of a method for determining a viewpoint width according to an embodiment of the present application, including:
101. The first device displays a target image in a stereoscopic mode, so that the second device photographs the target image to obtain a first image and a second image and analyzes them to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device.
In this embodiment, when the width of the viewpoint corresponding to the first device needs to be determined, the first device displays the target image in a stereoscopic mode, so that the second device photographs the displayed target image in real time to obtain a first image and a second image and analyzes them to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device. The first image and the second image are captured by the second device at different positions; the second device is any terminal device with an image-acquisition function and a communication function; and the target image is an image that is half black and half colored, the colored half being a visible color such as white, red, green, or yellow. It will be appreciated that the first image and the second image may contain not only the target image but also other content, so what must be determined is the pixel mean value of the screen area, that is, of the display area of the target image in the screen of the first device, rather than the pixel mean value of the whole captured image. In addition, while displaying the target image, the first device may display the position coordinate of the second device on its screen in real time, or directly display the position coordinate of the camera of the second device.
102. If a first coordinate-recording instruction sent by the second device when the first pixel mean value reaches a first preset value is received, the first device records a first position coordinate of the position where the second device was located when capturing the first image.
In this embodiment, after analyzing the first image to obtain the first pixel mean value, the second device determines whether the first pixel mean value reaches a first preset value. If it does, the second device sends a first coordinate-recording instruction, and the first device records, according to this instruction, a first position coordinate of the position where the second device was located when capturing the first image.
103. If a second coordinate-recording instruction sent by the second device when the second pixel mean value reaches a second preset value is received, the first device records a second position coordinate of the position where the second device was located when capturing the second image.
In this embodiment, after analyzing the second image to obtain the second pixel mean value, the second device determines whether the second pixel mean value reaches a second preset value. If it does, the second device sends a second coordinate-recording instruction, and the first device records, according to this instruction, a second position coordinate of the position where the second device was located when capturing the second image.
104. The first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device.
In this embodiment, after recording the first position coordinate and the second position coordinate, the first device may obtain the attachment angle of the grating corresponding to the first device, that is, the angle at which the grating of the 3D film is attached to the first device (the manner of obtaining it is not limited here; it may, for example, be input by a user), and determine the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle.
In one embodiment, the determining, by the first device, of the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device includes:
the first device determines a first position of the second device according to the attachment angle and the first position coordinate, where the first position is the position of the second device when the first pixel mean value reaches the first preset value;
the first device determines a second position of the second device according to the attachment angle and the second position coordinate, where the second position is the position of the second device when the second pixel mean value reaches the second preset value;
the first device determines the width of the viewpoint based on the first position and the second position.
In this embodiment, the first device may calculate the first position, that is, the position where the second device captured the first image when the first pixel mean value reached the first preset value, based on the attachment angle and the first position coordinate, according to the following formula:
X0′ = x0 + (y0 - y) * tan(a);
where X0′ is the first position, the first position coordinate is (x0, y0), y is a preset constant, and a is the attachment angle;
the first device may then calculate the second position, that is, the position where the second device captured the second image when the second pixel mean value reached the second preset value, based on the attachment angle and the second position coordinate, according to the following formula:
X1′ = x1 + (y1 - y) * tan(a);
where X1′ is the second position and the second position coordinate is (x1, y1);
after the first position and the second position have been calculated according to the formulas above, the width of the viewpoint corresponding to the first device may be calculated, based on the first position and the second position, according to the following formula:
VW = abs(X0′ - X1′);
where VW is the width of the viewpoint corresponding to the first device and abs is the absolute-value function.
The width calculation is described with reference to fig. 2, which is a schematic diagram of the viewpoint width calculation according to an embodiment of the present application. In fig. 2, 201 is the first position coordinate (x0, y0), 202 is the second position coordinate (x1, y1), and 203 marks the preset constant y in the coordinate system (it will be understood that the preset constant y may be set to half the width of the screen area, or set according to practical requirements, and is not specifically limited). Taking the first position X0′ as an example: since the attachment angle a of the grating is known and the first position coordinate (x0, y0) has been recorded, the point is shifted along the grating direction onto the reference line defined by the preset constant y, and the first position is calculated by the formula X0′ = x0 + (y0 - y) * tan(a). The second position is calculated in the same way, and VW = abs(X0′ - X1′), the absolute value of the difference between the first position and the second position, is the width of the viewpoint corresponding to the first device.
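As a worked illustration of fig. 2, with made-up numbers that are not taken from the application: for an attachment angle a = 10°, a preset constant y = 0, a first position coordinate (100, 40), and a second position coordinate (130, 20), the shifted positions are X0′ = 100 + 40 * tan(10°) ≈ 107.05 and X1′ = 130 + 20 * tan(10°) ≈ 133.53, giving VW ≈ 26.47 in the same units as the coordinates:

```python
import math

a = math.radians(10)                 # attachment angle a = 10 degrees
x0p = 100 + (40 - 0) * math.tan(a)   # X0' ≈ 107.05
x1p = 130 + (20 - 0) * math.tan(a)   # X1' ≈ 133.53
print(abs(x0p - x1p))                # VW ≈ 26.47
```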
It should be noted that after the first device has obtained the width of its corresponding viewpoint, the 3D image or 3D video displayed when the first device operates in 3D mode may be adjusted based on that width. Specifically, the first device may obtain the width of the grating corresponding to the first device, determine the arrangement layout of the viewpoints corresponding to the first device according to the width of the grating and the width of the viewpoint, and adjust the stereoscopic image displayed when the first device operates in the stereoscopic mode according to that arrangement layout and changes in the position of the user's eyes. That is, once the width of the viewpoint has been obtained, and since the width of the grating of the first device is known, the arrangement layout of the viewpoints under the grating of the 3D film attached to the screen of the first device can be calculated, and the 3D image displayed or 3D video played when the first device operates in the stereoscopic mode can then be adjusted as the position of the user's eyes changes, providing a better 3D display effect for the user.
It should also be noted that, in the description above, the position of the first device is unchanged, the second device changes its position, and the first device tracks the position coordinate of the camera of the second device and records it at each position. Other arrangements may also be used; for example, the position of the second device may remain unchanged while the first device changes position and records the position coordinate of the camera of the second device, as long as the position coordinates of the second device can be recorded while it captures images of the first device from different relative positions. When the position of the second device is unchanged and the coordinates are recorded by changing the position of the first device, the specific implementation process is as follows:
while the first device displays the target image in 3D mode, the second device photographs the first device, and the position coordinate of the second device (or of the camera of the second device; this is not specifically limited) is displayed on the screen of the first device. The first device adjusts its position until a coordinate-recording instruction sent by the second device is received and then records the position coordinate of the second device at the current position; the instruction is sent when the second device analyzes an image captured of the first device, obtains the corresponding pixel mean value, and that pixel mean value reaches the first preset value or the second preset value. The first device then continues to adjust its position until a coordinate-recording instruction sent by the second device is received again and records the position coordinate of the second device at the current position, the instruction again being sent when the pixel mean value of the image captured of the first device reaches the first preset value or the second preset value. Position coordinates at two different positions are thus obtained, and the width of the viewpoint is calculated from the position coordinates of the two different positions and the attachment angle.
To sum up, it can be seen that, in the embodiment provided by the present application, when the viewpoint width of the first device is determined, the second device can photograph the first device at different positions to obtain a plurality of images and analyze them to obtain the pixel mean value corresponding to each image. When a pixel mean value reaches a preset value, the first device records the position coordinate of the second device at that moment and then calculates the width of the viewpoint corresponding to the first device from the position coordinates of the second device at the different positions and the attachment angle of the grating. The width of the viewpoint corresponding to the first device can thus be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can then be adjusted according to the viewpoint width, improving the user's viewing experience.
Referring to fig. 3, fig. 3 is a schematic diagram of an application scenario provided in this embodiment. In fig. 3, the position of the first device is fixed, the second device changes its position, and the first device records the position coordinate of the second device whenever the pixel mean value of an image captured by the second device reaches a preset value. As shown in fig. 3, when the viewpoint width of the 3D film set on the first device 301 needs to be determined, the first device 301 displays the target image in 3D mode, and the second device photographs the displayed target image from different positions, obtains the corresponding image, and analyzes it to obtain the corresponding pixel mean value. The second device then determines whether the pixel mean value reaches the first preset value or the second preset value; if it does, the second device sends a coordinate-recording instruction to the first device 301, and the first device 301 records the position coordinate. For example, if the pixel mean value of the image captured when the second device is at position 302 reaches the first preset value, the second device sends the instruction and the first device 301 records the position coordinate of the second device at position 302. The second device then changes position again, photographs the first device again to obtain the corresponding image, and analyzes it to obtain the corresponding pixel mean value; it then judges whether this pixel mean value reaches the second preset value, and if so, sends a coordinate-recording instruction to the first device 301, which records the second position coordinate accordingly. For example, when the second device is at position 303 in fig. 3 and the pixel mean value of the captured image reaches the second preset value, the second device sends the instruction and the first device 301 records the position coordinate of the second device at position 303. The first device 301 can then calculate the width of the viewpoint corresponding to the first device 301 from the position coordinate at position 302, the position coordinate at position 303, and the attachment angle of the grating corresponding to the first device 301, and further adjust the 3D image or 3D video it displays in 3D mode according to that width. The width of the viewpoint corresponding to the first device can thus be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can be adjusted according to the width of the viewpoint, improving the user's viewing experience.
It should be noted that after the second device photographs the first device at a position and obtains the corresponding image, the second device may directly analyze that image to obtain the corresponding pixel mean value and determine whether the pixel mean value reaches the first preset value or the second preset value. If it does not, the second device changes its position and repeats the above steps until the pixel mean value of an image captured at some position reaches the first preset value or the second preset value, at which point it sends a coordinate-recording instruction to the first device so that the first device records the corresponding position coordinate. The second device then continues to adjust its position and repeats the above steps until the pixel mean value of an image captured at another position reaches the other preset value. The position coordinates of the two positions obtained by the first device serve as the first position coordinate and the second position coordinate.
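The capture-analyze-signal loop run by the second device can be summarized in code. The following is a minimal sketch under stated assumptions: capture_frame, locate_screen_area, and send_record_instruction are hypothetical helpers standing in for the camera, the screen-area detection, and the device-to-device messaging (none of them are specified by the application), the frame is assumed to be a 2-D NumPy-style array, and the tolerance test for "reaching" a preset value is likewise an assumption:

```python
def find_threshold_positions(first_preset, second_preset, tol=1.0):
    """Run until each preset pixel mean value has been hit once; a
    coordinate-recording instruction is sent to the first device each time."""
    remaining = [first_preset, second_preset]
    while remaining:
        frame = capture_frame()                 # hypothetical camera read
        x, y, w, h = locate_screen_area(frame)  # hypothetical screen detection
        mean = frame[y:y + h, x:x + w].sum() / (w * h)
        for preset in remaining:
            if abs(mean - preset) <= tol:       # pixel mean value reaches a preset value
                send_record_instruction()       # first device records the coordinate
                remaining.remove(preset)
                break
        # otherwise the second device is moved and the loop repeats
```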
The method for determining the viewpoint width according to the embodiment of the present application is described above with reference to fig. 1 from the perspective of the first device, and the method for determining the viewpoint width according to the embodiment of the present application is described below with reference to fig. 4 from the perspective of the second device.
Referring to fig. 4 in combination, fig. 4 is a schematic diagram of another embodiment of a method for determining a viewpoint width according to an embodiment of the present application, including:
401. The second device photographs, in real time, the target image displayed by the first device in the stereoscopic mode to obtain a first image and a second image.
In this embodiment, when the width of the viewpoint corresponding to the first device needs to be determined, the second device photographs, in real time, the target image displayed by the first device in the stereoscopic mode to obtain a first image and a second image. The first image and the second image are captured by the second device at different positions; the second device is any terminal device with an image-acquisition function and a communication function; and the target image is an image that is half black and half colored, the colored half being a visible color such as white, red, green, or yellow. That is, when the width of the viewpoint corresponding to the first device (i.e. the width of the viewpoint corresponding to the 3D film covering the screen of the first device) needs to be determined, the first device displays the half-black, half-colored image in 3D mode, and the second device then photographs the first device at different positions to obtain the first image and the second image.
402. The second device analyzes the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device.
In this embodiment, after photographing the first device at different positions to obtain the first image and the second image, the second device may analyze them respectively to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device, where the first pixel mean value corresponds to the first image, the second pixel mean value corresponds to the second image, and the screen area is the area in which the first device displays the target image. Specifically, the second device may calculate the first pixel mean value and the second pixel mean value by the following formula:
aver_pixl = ( Σ_{(i,j)∈A} pixel(i, j) ) / (w * h);
where aver_pixl is the first pixel mean value or the second pixel mean value, A is the screen area, pixel(i, j) is the value of the pixel at position (i, j) within A, w is the width of the screen area, and h is the height of the screen area.
It will be appreciated that the first image and the second image may contain not only the target image but also other content, so what must be determined is the pixel mean value of the screen area, that is, of the display area of the target image in the screen of the first device, rather than the pixel mean value of the whole captured image. In addition, while displaying the target image, the first device may display the position coordinate of the second device on its screen in real time, or directly display the position coordinate of the camera of the second device.
403. If the first pixel mean value reaches a first preset value, the second device sends a first coordinate-recording instruction to the first device, so that the first device records a first position coordinate.
In this embodiment, after analyzing the first image to obtain the first pixel mean value, the second device may determine whether the first pixel mean value reaches a first preset value. If it does, the second device sends a first coordinate-recording instruction to the first device, so that the first device records, according to that instruction, a first position coordinate, that is, the coordinate of the position of the second device when the first image was captured. In other words, the first device displays the position coordinate of the second device in real time; when the second device determines that the first pixel mean value reaches the first preset value, it sends the instruction, and upon receiving it the first device records the position coordinate of the position where the second device captured the first image.
It can be understood that, if the first pixel mean value does not reach the first preset value, the second device changes its position, photographs the first device again, and analyzes the newly captured image, repeating this until the pixel mean value of an image captured at some position reaches the first preset value, at which point it sends the coordinate-recording instruction to the first device so that the first device records the position coordinate of that position.
404. If the second pixel mean value reaches a second preset value, the second device sends a second coordinate-recording instruction to the first device, so that the first device records a second position coordinate, and the width of the viewpoint corresponding to the first device is determined according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device.
In this embodiment, after analyzing the second image to obtain the second pixel mean value, the second device may determine whether the second pixel mean value reaches a second preset value. If it does, the second device sends a second coordinate-recording instruction to the first device, so that the first device records a second position coordinate, that is, the coordinate of the position of the second device when the second image was captured, and determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device. In other words, when the second device determines that the second pixel mean value reaches the second preset value, it sends the instruction; upon receiving it, the first device records the position coordinate of that position and calculates the width of the viewpoint from the two position coordinates and the attachment angle.
It can be understood that, if the second pixel mean value does not reach the second preset value, the second device changes its position, photographs the first device again, and analyzes the newly captured image, repeating this until the pixel mean value of an image captured at some position reaches the second preset value, at which point it sends the coordinate-recording instruction to the first device so that the first device records the position coordinate of that position.
It should be noted that after the second device photographs the first device at a position and obtains the corresponding image, the second device directly analyzes that image to obtain the corresponding pixel mean value and judges whether the pixel mean value reaches the first preset value or the second preset value. If it does not, the second device changes its position and repeats the above steps until the pixel mean value of an image captured at some position reaches the first preset value, whereupon it sends a coordinate-recording instruction to the first device so that the first device records the corresponding position coordinate. It then continues to adjust its position and repeats the above steps until the pixel mean value of an image captured at another position reaches the second preset value, whereupon it again sends a coordinate-recording instruction to the first device so that the first device records the coordinate of that position and calculates the width of the viewpoint from the coordinates of the two positions and the attachment angle.
To sum up, it can be seen that, in the embodiment provided by the present application, when the viewpoint width of the first device is determined, the second device can photograph the first device at different positions to obtain a plurality of images, analyze them to obtain the pixel mean value corresponding to each image, and send a coordinate-recording instruction to the first device whenever a pixel mean value reaches a preset value, so that the first device records the position coordinate of the corresponding position according to the instruction and calculates the width of the viewpoint corresponding to the first device from the position coordinates and the attachment angle of the grating corresponding to the first device. The width of the viewpoint corresponding to the first device can thus be determined quickly without knowing the screen optical parameters of the first device, and the 3D image or 3D video displayed by the first device can be adjusted according to the viewpoint width, improving the user's viewing experience.
The method for determining the viewpoint width provided in the embodiments of the present application is described above from the perspectives of the first device and the second device; it is described below, with reference to fig. 5, from the perspective of the interaction between the first device and the second device.
Referring to fig. 5, fig. 5 is a schematic diagram of another embodiment of the method for determining a viewpoint width according to an embodiment of the present application, including:
501. The first device displays the target image in 3D mode.
502. The second device photographs, in real time, the target image displayed by the first device in 3D mode to obtain a first image and a second image.
503. The second device analyzes the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of the screen area corresponding to the first device.
It will be appreciated that step 501 is similar to step 101 in fig. 1, and steps 502 and 503 are similar to steps 401 and 402 in fig. 4; they have been described in detail above and are not repeated here.
504. If the first pixel mean value reaches a first preset value, the second device sends a first coordinate-recording instruction to the first device.
505. The first device records the first position coordinate according to the first coordinate-recording instruction.
506. If the second pixel mean value reaches a second preset value, the second device sends a second coordinate-recording instruction to the first device.
507. The first device records the second position coordinate according to the second coordinate-recording instruction.
It is to be understood that steps 504 to 507 are similar to the coordinate-recording steps described with reference to fig. 1 and fig. 4 and are not repeated here.
508. The first device determines the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device.
It will be appreciated that step 508 is similar to step 104 in fig. 1, which is described in detail above and is not repeated here.
To sum up, it can be seen that, in the embodiment provided by the present application, when the viewpoint width of the first device is determined, the second device can photograph the first device at different positions to obtain a plurality of images and analyze them to obtain the pixel mean value corresponding to each image. When a pixel mean value reaches a preset value, the first device records the position coordinate of the second device at that moment and then calculates the width of the viewpoint corresponding to the first device from the position coordinates of the second device at the different positions and the attachment angle of the grating. The width of the viewpoint corresponding to the first device can thus be determined quickly even when the screen optical parameters of the first device are unknown, and the 3D image or 3D video displayed by the first device can then be adjusted according to the viewpoint width, improving the user's viewing experience.
It should be noted that the embodiments above take as an example the first device calculating the width of the viewpoint corresponding to the first device from the first position coordinate, the second position coordinate, and the attachment angle of the grating. Alternatively, after recording the first position coordinate and the second position coordinate, the first device may send them to the second device, and the second device may calculate the width of the viewpoint corresponding to the first device from the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device and then send the resulting width back to the first device. In addition, the analysis of the first image and the second image to obtain the pixel mean values may also be performed by the first device; this is not limited here.
It should be noted that the pixel mean value calculation, the position calculation, and the viewpoint width calculation have already been described in detail with reference to fig. 1 to 5; apart from the executing entity, they are the same as described there and are not repeated here.
The embodiments of the present application are described above from the perspective of the viewpoint width determining method, and are described below from the perspective of the viewpoint width determining device.
Referring to fig. 6, fig. 6 is a schematic virtual structure diagram of a first device according to an embodiment of the present application, where the first device 600 includes:
a display unit 601, configured to display a target image in a stereoscopic mode, so that a second device photographs the target image in real time to obtain a first image and a second image and analyzes them to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, where the first image and the second image are captured by the second device at different positions, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
a recording unit 602, configured to record a first position coordinate of the position where the second device was located when capturing the first image, if a first coordinate-recording instruction sent by the second device when the first pixel mean value reaches a first preset value is received;
the recording unit 602 being further configured to record a second position coordinate of the position where the second device was located when capturing the second image, if a second coordinate-recording instruction sent by the second device when the second pixel mean value reaches a second preset value is received;
and a determining unit 603, configured to determine the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the attachment angle of the grating corresponding to the first device.
In a possible design, the determining unit 603 is specifically configured to:
determining a first position of the second device according to the fitting angle and the first position coordinate, wherein the first position is a position where the second device is located when the first pixel mean value reaches the first preset value;
determining a second position of the second device according to the fitting angle and the second position coordinate, wherein the second position is a position where the second device is located when the second pixel mean value reaches the second preset value;
and determining the width of the viewpoint according to the first position and the second position.
In a possible design, the determining unit 603 determining the first position of the second device according to the fitting angle and the first position coordinate includes:
the first position is calculated by the following formula:
X0′ = x0 + (y0 - y) * tan(a);
wherein X0′ is the first position, the first position coordinates are (x0, y0), y is a preset constant, and a is the fitting angle;
The determining unit 603 determining the second position of the second device according to the fitting angle and the second position coordinate includes:
the second position is calculated by the following formula:
X1′ = x1 + (y1 - y) * tan(a);
wherein X1′ is the second position and the second position coordinates are (x1, y1);
The determining unit 603 determining the width of the viewpoint according to the first position and the second position includes:
the width of the viewpoint is calculated by the following formula:
VW = abs(X0′ - X1′);
wherein VW is the width of the viewpoint and abs is the absolute-value function.
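For illustration, the three formulas above combine into a single routine. The following Python sketch is not part of the patent disclosure; the function name and the assumptions that the coordinates and the preset constant y share one planar unit (e.g., millimeters) and that the fitting angle a is given in degrees are ours.

```python
import math

def viewpoint_width(p0, p1, fit_angle_deg, y_ref):
    """Viewpoint width from two recorded positions (illustrative sketch).

    p0, p1        -- (x, y) coordinates recorded for the first and second image
    fit_angle_deg -- fitting angle a of the grating, assumed here in degrees
    y_ref         -- the preset constant y acting as a reference line
    """
    a = math.radians(fit_angle_deg)
    # Project each recorded position onto the line y = y_ref along the
    # grating direction: X' = x + (y - y_ref) * tan(a).
    x0p = p0[0] + (p0[1] - y_ref) * math.tan(a)
    x1p = p1[0] + (p1[1] - y_ref) * math.tan(a)
    # VW = abs(X0' - X1')
    return abs(x0p - x1p)

# Example with made-up coordinates:
# viewpoint_width((120.0, 300.0), (185.0, 300.0), 9.0, 0.0) -> 65.0
```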
In a possible design, the determining unit 603 is further configured to:
acquiring the width of the grating;
determining the arrangement layout of the view points corresponding to the first equipment according to the width of the grating and the width of the view points;
and adjusting the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and the human eye position change of the user.
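The patent does not spell out how the arrangement layout is computed, so the sketch below is purely an assumption: it tiles an integer number of viewpoints under one grating period and leaves the eye-tracking-driven rendering to the caller.

```python
def viewpoint_layout(grating_width, view_width):
    """Hypothetical layout: viewpoint intervals tiled under one grating period.

    Assumes the period covers a whole number of viewpoints; a real
    implementation would also handle the fractional remainder.
    """
    n_views = max(1, round(grating_width / view_width))
    return [(k * view_width, (k + 1) * view_width) for k in range(n_views)]

# Example: a 3.25 mm grating period with a 0.65 mm viewpoint width
# yields five viewpoint intervals per period:
# viewpoint_layout(3.25, 0.65) -> [(0.0, 0.65), (0.65, 1.3), ...]
```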
Referring to fig. 7, fig. 7 is a schematic diagram of a virtual structure of a second device according to an embodiment of the present application, where the second device 700 includes:
a shooting unit 701, configured to shoot a target image displayed by a first device in a stereoscopic mode in real time, so as to obtain a first image and a second image, where the first image and the second image are obtained by shooting the target image by the second device at different positions;
an analysis unit 702, configured to analyze the first image and the second image respectively, so as to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, where the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
a first sending unit 703, configured to send a first coordinate recording instruction to the first device if the first pixel mean value reaches a first preset value, so that the first device records a first position coordinate, where the first position coordinate is a coordinate of a position where the second device is located when the first image is captured;
and a second sending unit 704, configured to send a second coordinate recording instruction to the first device if the second pixel mean value reaches a second preset value, so that the first device records a second position coordinate, and determine a width of a view point corresponding to the first device according to the first position coordinate, the second position coordinate, and a fitting angle of a grating corresponding to the first device, where the second position coordinate is a coordinate of a position where the second device is located when the second device captures the second image.
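The cooperation between units 701-704 can be pictured as a capture-and-notify loop running on the second device. Every identifier in the sketch below (camera, link, screen_mean, send_instruction) is hypothetical, "reaches" is read here as "rises to at least", and screen_mean is sketched after the pixel mean formula below.

```python
def capture_loop(camera, link, first_preset, second_preset):
    """Hypothetical loop run on the second device as it is moved in
    front of the first device's screen (units 701-704 in fig. 7)."""
    sent_first = False
    while True:
        frame = camera.capture_frame()                 # real-time shot of the target image
        mean = screen_mean(frame, camera.screen_box)   # pixel mean over the screen area
        if not sent_first and mean >= first_preset:
            link.send_instruction("RECORD_FIRST")      # first device records (x0, y0)
            sent_first = True
        elif sent_first and mean >= second_preset:
            link.send_instruction("RECORD_SECOND")     # first device records (x1, y1)
            break
```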
In a possible design, the analysis unit 702 is specifically configured to:
the second device calculates the first pixel mean value and the second pixel mean value by the following formula:
aver_pixl = ( Σ_{(x,y)∈A} P(x,y) ) / (w * h);
wherein aver_pixl is the first pixel mean value or the second pixel mean value, P(x,y) is the pixel value at coordinate (x, y), A is the screen area, w is the width of the screen area, and h is the height of the screen area.
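A minimal NumPy sketch of this mean, assuming a grayscale frame and a known bounding box for the screen area A (how the screen is located inside the frame is outside the scope of the formula):

```python
import numpy as np

def screen_mean(frame, screen_box):
    """Mean pixel value over the screen area A of a captured frame.

    frame      -- H x W grayscale image as a NumPy array
    screen_box -- (left, top, w, h) bounding box of the first device's
                  screen inside the frame
    """
    left, top, w, h = screen_box
    region = frame[top:top + h, left:left + w].astype(np.float64)
    # aver_pixl = (sum of pixel values over A) / (w * h)
    return region.sum() / (w * h)
```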
Next, another viewpoint width determining apparatus provided in the embodiments of the present application is described; it may be a terminal device. Referring to fig. 8, the terminal device 800 includes:
a receiver 801, a transmitter 802, a processor 803 and a memory 804 (where the number of processors 803 in the terminal device 800 may be one or more, one processor being an example in fig. 8). In some embodiments of the present application, the receiver 801, transmitter 802, processor 803, and memory 804 may be connected by a bus or other means, with the bus connection being exemplified in fig. 8.
The memory 804 may include a read-only memory and a random access memory, and provides instructions and data to the processor 803. A portion of the memory 804 may also include non-volatile random access memory (NVRAM). The memory 804 stores an operating system and operating instructions, executable modules or data structures, or a subset or an extended set thereof, where the operating instructions may include various operating instructions for performing various operations. The operating system may include various system programs for implementing various underlying services and handling hardware-based tasks.
The processor 803 controls the operation of the terminal device and may also be referred to as a central processing unit (CPU). In a specific application, the individual components of the terminal device are coupled together by a bus system, which may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like. For clarity of illustration, however, the various buses are referred to in the figures as the bus system.
The methods disclosed in the embodiments of the present application may be applied to the processor 803 or implemented by the processor 803. The processor 803 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 803 or by instructions in the form of software. The processor 803 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 804; the processor 803 reads the information in the memory 804 and completes the steps of the above method in combination with its hardware.
In the embodiments of the present application, the processor 803 is configured to perform the operations performed by the first device and the second device in the foregoing embodiments.
The embodiments of the present application further provide a computer-readable medium containing computer-executable instructions that enable a server to execute the viewpoint width determination method described in the foregoing embodiments; the implementation principles and technical effects are similar and are not repeated here.
It should be further noted that the above-described apparatus embodiments are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the apparatus embodiments provided by the present application, the connection relationship between modules indicates that they have communication connections, which may be specifically implemented as one or more communication buses or signal lines.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by software plus necessary general-purpose hardware, or of course by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components and the like. Generally, functions performed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structure implementing the same function can vary: an analog circuit, a digital circuit, a dedicated circuit, and so on. However, for the present application, a software implementation is the preferred embodiment in most cases. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk of a computer, including several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute the methods described in the embodiments of the present application.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for determining a viewpoint width, comprising:
the method comprises the steps that a first device displays a target image in a three-dimensional mode, so that a second device shoots the target image in real time to obtain a first image and a second image, the first image and the second image are analyzed to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, the first image and the second image are obtained by shooting the target image at different positions by the second device, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
If the first equipment receives a first coordinate recording instruction sent by the second equipment when the first pixel mean value reaches a first preset value, the first equipment records a first position coordinate of a position where the second equipment is located when shooting the first image;
if the first device receives a second coordinate recording instruction sent by the second device when the second pixel mean value reaches a second preset value, the first device records a second position coordinate of a position where the second device is located when shooting the second image;
and the first equipment determines the width of the view point corresponding to the first equipment according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first equipment.
2. The method of claim 1, wherein the determining, by the first device, the width of the viewpoint corresponding to the first device according to the first position coordinate, the second position coordinate, and the fitting angle of the grating corresponding to the first device comprises:
the first device determines a first position of the second device according to the attaching angle and the first position coordinate, wherein the first position is a position where the second device is located when the first pixel mean value reaches the first preset value;
The first device determines a second position of the second device according to the attaching angle and the second position coordinate, wherein the second position is a position where the second device is located when the second pixel mean value reaches the second preset value;
the first device determines a width of the viewpoint according to the first position and the second position.
3. The method of claim 2, wherein the first device determining a first location of the second device based on the fit angle and the first location coordinate comprises:
the first device calculates the first location by the following formula:
X0′ = x0 + (y0 - y) * tan(a);
wherein X0′ is the first position, the first position coordinates are (x0, y0), y is a preset constant, and a is the fitting angle;
the first device determining the second position of the second device according to the fitting angle and the second position coordinates includes:
the first device calculates the second location by the following formula:
X1′ = x1 + (y1 - y) * tan(a);
wherein X1′ is the second position and the second position coordinates are (x1, y1);
The first device determining a width of the viewpoint according to the first location and the second location includes:
The first device calculates the width of the viewpoint by the following formula:
VW = abs(X0′ - X1′);
wherein VW is the width of the viewpoint and abs is the absolute-value function.
4. A method according to any one of claims 1 to 3, further comprising:
acquiring the width of the grating;
determining the arrangement layout of the view points corresponding to the first equipment according to the width of the grating and the width of the view points;
and adjusting the stereoscopic image displayed when the first device operates in the stereoscopic mode according to the arrangement layout of the viewpoints and the human eye position change of the user.
5. A method for determining a viewpoint width, comprising:
the second device shoots a target image displayed by the first device in a three-dimensional mode in real time to obtain a first image and a second image, wherein the first image and the second image are obtained by shooting the target image at different positions by the second device;
the second device analyzes the first image and the second image respectively to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
If the first pixel mean value reaches a first preset value, the second device sends a first coordinate recording instruction to the first device, so that the first device records a first position coordinate, wherein the first position coordinate is the coordinate of the position of the second device when the first image is shot;
if the second pixel mean value reaches a second preset value, the second device sends a second coordinate recording instruction to the first device, so that the first device records a second position coordinate, the width of a view point corresponding to the first device is determined according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first device, and the second position coordinate is the coordinate of the position where the second device is located when shooting the second image.
6. The method of claim 5, wherein the second device analyzing the first image and the second image, respectively, to obtain a first pixel mean and a second pixel mean of a screen area corresponding to the first device comprises:
the second device calculates the first pixel mean value and the second pixel mean value by the following formula:
aver_pixl = ( Σ_{(x,y)∈A} P(x,y) ) / (w * h);
wherein aver_pixl is the first pixel mean value or the second pixel mean value, P(x,y) is the pixel value at coordinate (x, y), A is the screen area, w is the width of the screen area, and h is the height of the screen area.
7. An apparatus, the apparatus being a first apparatus, the first apparatus comprising:
the display unit is used for displaying a target image in a stereoscopic mode, so that a second device shoots the target image in real time to obtain a first image and a second image, the first image and the second image are analyzed to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, the first image and the second image are obtained by shooting the target image at different positions by the second device, the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
the recording unit is used for recording a first position coordinate of a position where the second equipment is located when the second equipment shoots the first image if a first coordinate recording instruction sent by the second equipment when the first pixel mean value reaches a first preset value is received;
The recording unit is further configured to record a second position coordinate of a position where the second device is located when the second device captures the second image if a second coordinate recording instruction sent by the second device when the second pixel mean value reaches a second preset value is received;
and the determining unit is used for determining the width of the view point corresponding to the first equipment according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first equipment.
8. An apparatus, the apparatus being a second apparatus, the second apparatus comprising:
the shooting unit is used for shooting a target image displayed by a first device in a stereoscopic mode in real time to obtain a first image and a second image, wherein the first image and the second image are obtained by shooting the target image at different positions by the second device;
the analysis unit is used for respectively analyzing the first image and the second image to obtain a first pixel mean value and a second pixel mean value of a screen area corresponding to the first device, wherein the first pixel mean value corresponds to the first image, and the second pixel mean value corresponds to the second image;
The first sending unit is used for sending a first coordinate recording instruction to the first device if the first pixel mean value reaches a first preset value, so that the first device records a first position coordinate, wherein the first position coordinate is the coordinate of the position of the second device when the first image is shot;
and the second sending unit is used for sending a second coordinate recording instruction to the first equipment if the second pixel mean value reaches a second preset value, so that the first equipment records a second position coordinate, and determining the width of the view point corresponding to the first equipment according to the first position coordinate, the second position coordinate and the attaching angle of the grating corresponding to the first equipment, wherein the second position coordinate is the coordinate of the position where the second equipment is located when shooting the second image.
9. A computer device, comprising:
at least one processor, a memory and a transceiver that are connected to one another, wherein the memory is configured to store program code, and the processor is configured to invoke the program code in the memory to perform the method for determining the viewpoint width according to any one of claims 1 to 4 or claims 5 to 6.
10. A computer storage medium, comprising:
instructions which, when run on a computer, cause the computer to perform the method for determining the viewpoint width according to any one of claims 1 to 4 or claims 5 to 6.
CN202111048861.3A 2021-09-08 2021-09-08 Viewpoint width determining method and device and storage medium Active CN113763472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111048861.3A CN113763472B (en) 2021-09-08 2021-09-08 Viewpoint width determining method and device and storage medium

Publications (2)

Publication Number Publication Date
CN113763472A CN113763472A (en) 2021-12-07
CN113763472B (en) 2024-03-29

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781560B (en) * 2021-09-08 2023-12-22 未来科技(襄阳)有限公司 Viewpoint width determining method, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6608622B1 (en) * 1994-10-14 2003-08-19 Canon Kabushiki Kaisha Multi-viewpoint image processing method and apparatus
CN102186091A (en) * 2011-01-25 2011-09-14 天津大学 Grating-based video pixel arrangement method for multi-view stereoscopic mobile phone
JP2015125494A (en) * 2013-12-25 2015-07-06 日本電信電話株式会社 Image generation method, image generation device, and image generation program
WO2016032600A1 (en) * 2014-08-29 2016-03-03 Google Inc. Combination of stereo and structured-light processing
CN112925109A (en) * 2019-12-05 2021-06-08 北京芯海视界三维科技有限公司 Multi-view naked eye 3D display screen and naked eye 3D display terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5931062B2 (en) * 2011-06-21 2016-06-08 シャープ株式会社 Stereoscopic image processing apparatus, stereoscopic image processing method, and program

Similar Documents

Publication Publication Date Title
US11615546B2 (en) Systems and methods for depth estimation using generative models
US9521362B2 (en) View rendering for the provision of virtual eye contact using special geometric constraints in combination with eye-tracking
US9898856B2 (en) Systems and methods for depth-assisted perspective distortion correction
US20110158509A1 (en) Image stitching method and apparatus
US9088772B2 (en) Image-capturing apparatus
US7643070B2 (en) Moving image generating apparatus, moving image generating method, and program
US20070024710A1 (en) Monitoring system, monitoring apparatus, monitoring method and program therefor
US10535193B2 (en) Image processing apparatus, image synthesizing apparatus, image processing system, image processing method, and storage medium
CN109785390B (en) Method and device for image correction
WO2021008205A1 (en) Image processing
CN113763472B (en) Viewpoint width determining method and device and storage medium
WO2016208404A1 (en) Device and method for processing information, and program
CN113781560B (en) Viewpoint width determining method, device and storage medium
CN203894772U (en) Mass face detecting and identifying system
US10122996B2 (en) Method for 3D multiview reconstruction by feature tracking and model registration
CN110770786A (en) Shielding detection and repair device based on camera equipment and shielding detection and repair method thereof
CN114786001B (en) 3D picture shooting method and 3D shooting system
CN111279352B (en) Three-dimensional information acquisition system through pitching exercise and camera parameter calculation method
US10282633B2 (en) Cross-asset media analysis and processing
JP2013258583A (en) Captured image display, captured image display method, and program
JP7154841B2 (en) IMAGING SYSTEM, IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD AND PROGRAM
JP2000184396A (en) Video processor, its control method and storage medium
KR20160101762A (en) The method of auto stitching and panoramic image genertation using color histogram
CN113763473B (en) Viewpoint width determining method and device and storage medium
CN110544317A (en) Image processing method, image processing device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant