CN111947894A - Measuring method, system, device and terminal equipment - Google Patents

Measuring method, system, device and terminal equipment

Info

Publication number
CN111947894A
Authority
CN
China
Prior art keywords
camera
focusing
display module
virtual reality
checkerboard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010747305.4A
Other languages
Chinese (zh)
Other versions
CN111947894B (en)
Inventor
朱建雄
张韦韪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huynew Technology Co ltd
Original Assignee
Shenzhen Huynew Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huynew Technology Co ltd filed Critical Shenzhen Huynew Technology Co ltd
Priority to CN202010747305.4A priority Critical patent/CN111947894B/en
Publication of CN111947894A publication Critical patent/CN111947894A/en
Application granted granted Critical
Publication of CN111947894B publication Critical patent/CN111947894B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00 Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/02 Testing optical properties
    • G01M11/0242 Testing optical properties by measuring geometrical properties or aberrations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00 Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/02 Testing optical properties

Landscapes

  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application is suitable for the technical field of virtual reality display devices, and provides a measurement method, system, apparatus, and terminal device. According to the embodiments of the application, when the display module of a virtual reality display device is in a lit state, the camera is controlled to focus on the display module; when the display module is at the focal position of the camera, the image frame displayed by the display module is acquired through the camera; and the optical parameters of the virtual reality display device are acquired according to the acquired image frame. By applying this technical scheme, accurate measurement of multiple optical parameters of a virtual reality display device can be realized simply and quickly without switching measurement image frames (measurement graphic cards), and the method can be widely applied in the mass-production and rapid research-and-development stages of virtual reality display devices.

Description

Measuring method, system, device and terminal equipment
Technical Field
The present application relates to the technical field of virtual reality display devices, and in particular to a measurement method, system, apparatus, and terminal device for measuring optical parameters of a virtual reality display device.
Background
Virtual reality display devices implement virtual reality (VR) and can bring the user good visual enjoyment.

The optical parameters of the display module of a virtual reality display device directly affect user experience and comfort. They include, but are not limited to, sharpness (SFR, Spatial Frequency Response), virtual image distance (VID), field of view (FOV), distortion, contrast ratio, dispersion (CA, chromatic aberration), parallax (disparity), angular resolution (PPD, Pixels Per Degree), brightness, brightness uniformity, color uniformity, and correlated color temperature (CCT). If any one of these optical parameters fails to reach the standard, the user's experience is seriously affected. Therefore, in research, development, and production, measuring whether the above parameters reach the standard is one of the core tasks for the optical module of a virtual reality display device.

In the prior art, a common measurement scheme requires a computer to control the display module to switch between different graphic cards in order to measure the corresponding optical parameters. The process is not only cumbersome but also time-consuming, and is difficult to apply and popularize in rapid research-and-development iteration and large-scale mass production. Therefore, a measurement scheme capable of rapidly measuring different optical parameters of a virtual reality display device is needed.
Disclosure of Invention
The embodiment of the application provides a measuring method, a measuring system, a measuring device and a terminal device, which can simply and quickly realize accurate measurement of a plurality of optical parameters of virtual reality display equipment under the condition of not switching a measuring image picture (measuring graphic card).
A first aspect of an embodiment of the present application provides a measurement method, including:
when a display module of the virtual reality display equipment is in a lighting state, controlling the camera to focus on the display module; the display module is used for displaying an image picture to be detected, and the image picture comprises a checkerboard area and a pure-color circular area surrounding the checkerboard area;
when the display module is positioned at the focal position of the camera, acquiring an image picture displayed by the display module through the camera;
and acquiring optical parameters of the virtual reality display equipment according to the acquired image picture.
A second aspect of an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the measurement method according to the first aspect of the embodiments of the present application when executing the computer program.
A third aspect of an embodiment of the present application provides a measurement system, including: the terminal device comprises a camera and the terminal device according to the second aspect of the embodiment of the application, wherein the terminal device is in communication connection with the camera.
A fourth aspect of the embodiments of the present application provides a measurement apparatus, including: a processing unit and an imaging unit;
the camera shooting unit is used for focusing a display module of the virtual reality display equipment when the display module is in a lighting state; the display module is used for displaying an image picture to be detected, and the image picture comprises a checkerboard area and a pure-color circular area surrounding the checkerboard area; when the display module is positioned at the focal position of the camera shooting unit, the image picture displayed by the display module is acquired through the camera shooting unit;
and the processing unit is used for acquiring the optical parameters of the virtual reality display equipment according to the acquired image picture.
In the embodiments of the present application, at least a plurality of optical parameters of the virtual reality display device, such as the field angle, sharpness, virtual image distance, and distortion, can be measured from the same image frame displayed by the display module of the virtual reality display device. Compared with the traditional approach of switching among multiple image frames (multiple graphic cards), the scheme of the embodiments of the invention can rapidly measure multiple optical parameters of the virtual reality display device through the same image frame (the same measurement graphic card), thereby improving measurement efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic flow chart of a first embodiment of a measurement method provided in an embodiment of the present application;
FIG. 2-a is a schematic diagram of an image frame in a method for measuring an angle of field according to a first embodiment of the present application;
FIG. 2-b is a schematic diagram of the relative positional relationship among the terminal device, the camera, and the virtual reality display device provided by the embodiment of the present application;
FIG. 3-a is a schematic view of a checkerboard card of an embodiment;
FIG. 3-b is a schematic view of another embodiment of a checkerboard card;
FIG. 4 is a schematic diagram of an image frame after being subjected to a tilting process;
FIG. 5 is a schematic flow chart of a second embodiment of a measurement method provided in an embodiment of the present application;
FIG. 6 is a schematic flow chart of a third embodiment of a measurement method provided in the embodiments of the present application;
FIG. 7-a is a schematic diagram of checkerboard grid slope information;
FIG. 7-b is a schematic diagram of solving for a sharpness parameter;
FIG. 8 is a schematic illustration of image picture distortion;
fig. 9 is a schematic flow chart of a fourth embodiment of a measurement method provided in the embodiment of the present application;
fig. 10 is a schematic flow chart of a fifth embodiment of a measurement method provided in the embodiment of the present application;
fig. 11 is a schematic flowchart of a sixth embodiment of a measurement method provided in an embodiment of the present application;
FIG. 12 is a diagram of another embodiment of an image frame of a measurement method provided in an embodiment of the present application;
FIG. 13 is a diagram of another embodiment of an image frame of a measurement method provided in an embodiment of the present application;
fig. 14 is a schematic flow chart of a seventh embodiment of a measurement method provided in the embodiment of the present application;
FIG. 15 is a diagram of another embodiment of an image frame of a measurement method provided in an embodiment of the present application;
FIG. 16 is a diagram of another embodiment of an image frame of a measurement method provided in an embodiment of the present application;
fig. 17 is a schematic flow chart of an eighth embodiment of a measurement method provided in the embodiment of the present application;
fig. 18 is a schematic specific flowchart of step S1704 in the eighth embodiment of the measurement method provided in the embodiment of the present application;
fig. 19 is a schematic flowchart of a ninth embodiment of a measurement method provided in an embodiment of the present application;
fig. 20 is a schematic flowchart of a tenth embodiment of a measurement method provided in an embodiment of the present application;
FIG. 21 is a schematic structural diagram of a measurement apparatus provided in an embodiment of the present application;
fig. 22 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail. It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The optical parameter measuring method for the virtual reality display device can be applied to terminal devices such as a desktop computer, an industrial personal computer, an ultra-mobile personal computer (UMPC), a notebook computer, a palmtop computer, a tablet computer, a mobile phone, a personal digital assistant (PDA), or a cloud server; the terminal device may also be a special-purpose device for implementing the measurement method. The measurement method is performed by a processor of the terminal device when running the computer program. The terminal device comprises a processor and may also comprise, or be externally connected to, a camera, a memory, a display, an audio device, a communication module, a power supply device, and human-computer interaction equipment such as a keyboard, a mouse, and a remote controller. The embodiments of the present application place no limit on the specific type of the terminal device. In application, the camera may be a manual-focus camera whose focusing is controlled manually, or an autofocus camera with an automatic focusing function.
In application, the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In application, the storage may be an internal storage unit of the terminal device, for example, a hard disk or a memory of the terminal device. The memory may also be an external storage device of the terminal device, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the terminal device. The memory may also include both internal and external storage units of the terminal device. The memory is used for storing an operating system, application programs, a Boot Loader (Boot Loader), data, and other programs, such as program codes of the computer programs. The memory may also be used to temporarily store data that has been output or is to be output.
As shown in fig. 1, a measurement method provided in an embodiment of the present application is used for measuring an optical parameter of a virtual reality display device, and includes:
step S101, when a display module of the virtual reality display equipment is in a lighting state, the camera is controlled to focus on the display module. The display module is used for displaying an image picture to be detected, and the image picture comprises a checkerboard area and a pure-color circular area surrounding the checkerboard area.
In application, the virtual reality display device may be a head-mounted display device that has a virtual reality display effect and any form factor, for example, smart glasses, a smart helmet, and the like. The display module of the virtual reality display device may be of the display type or the projection type. A display-type display module includes a display screen: the display screen either displays the image directly, or a micro projector projects the image onto the display screen. A projection-type display module projects the image directly onto the retina of the human eye through a micro projector. The virtual reality display device may include at least one display module, for example, one, two, or more display modules. As shown in FIG. 2-a, the display module described in this embodiment displays an image frame to be measured, which includes a checkerboard region and a solid-color circular region surrounding the checkerboard region; this is not repeated below.
In application, the display module of the virtual reality display device is lit, and the display module displays the image frame toward the camera. The image frame to be measured that is displayed when the display module is lit may be preset; for example, it may be an example image stored by default in the memory of the virtual reality display device before it leaves the factory. In the embodiment of the present invention, the display module may display an image frame as shown in FIG. 2-a. In the image frame to be measured, the proportion between the checkerboard region and the solid-color circular region can be customized according to actual requirements; for example, when the optical parameters at 0.3 times the full field of view need to be tested, the proportion of the checkerboard region to the solid-color circular region can be set to 0.3. It should further be noted that the checkerboard region may include checkerboard patterns with different contrast ratios; for example, in one embodiment the brightness contrast of the checkerboard region may be 4:1, but other contrast ratios may also be set. Controlling the camera to focus on the display module means controlling the camera to focus on the image displayed by the display module, so that the image frame is located at the focal position of the camera and the camera can image the frame clearly.
In one embodiment, step S101 includes:
when a display module of the virtual reality display equipment is in a lighting state and displays an image picture to be detected, and the camera is aligned to the central position of the display module, the camera is controlled to focus the image picture.
In application, the display module is lit to display the image frame to be measured, which facilitates the subsequent control of the camera to focus on the image frame automatically. By presetting the image frame as the example image stored by default in the memory of the virtual reality display device before it leaves the factory, no labor is needed to store additional images in the memory of the virtual reality display device in advance.
It should be noted that, in the application, the display module of the virtual reality display device may be manually controlled to be in the lighting state by the tester, for example, the tester manually triggers the virtual reality display device to start up, so that the display module displays the image to be tested, and the display module is in the lighting state. The terminal equipment can be in communication connection with the virtual reality display equipment, and the display module is controlled to be in a lighting state through the communication module by the processor of the terminal equipment in a wired communication or wireless communication mode.
In use, the display module of the virtual reality display device can be fixed facing the camera by a fixing device such as a clamp or a mechanical arm. A tester then manually moves the camera while observing with the naked eye the image frame displayed or projected by the display module within the camera's field of view, and moves the camera until the center of its field of view is aligned with the center of the display module (that is, the center of the image frame displayed or projected by the display module), so that the camera is aimed at the display module; at this point, the tester manually triggers the camera to focus on the display module manually or automatically. Alternatively, the display module of the virtual reality display device can be fixed facing the camera by a fixing device such as a clamp or a mechanical arm, and a two-axis or multi-axis pan-tilt camera (for example, a five-axis pan-tilt camera with a tilt angle) is used. The terminal device is communicatively connected to the camera; the processor of the terminal device controls the camera to move and, while the camera moves, controls it to continuously capture the image frames within its field of view. The processor acquires the images captured by the camera and analyzes whether the image frame is located at the center of the camera's field of view; when it is, the processor controls the camera to stop moving, determines that the camera is now aimed at the display module, and then controls the camera to focus on the display module automatically. When the terminal device is equipped with the camera, the processor can control the camera directly through wired communication based on a cable (for example, a data bus); when the camera is externally connected to the terminal device, the processor controls the camera through the communication module by wired or wireless communication.
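By way of illustration only, the centering loop described above can be sketched in software as follows. This is a minimal sketch: grab_frame and move_camera are hypothetical stand-ins for the hardware-specific capture and pan-tilt control interfaces, and the fixed binarization threshold of 30 is likewise an assumption.

```python
import cv2

def image_center_offset(frame_gray, thresh=30):
    """Pixel offset (dx, dy) between the centroid of the lit display area
    and the center of the camera's field of view."""
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # the lit display is not visible in the field of view
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = frame_gray.shape
    return cx - w / 2.0, cy - h / 2.0

def align(grab_frame, move_camera, tol_px=2.0, max_iter=100):
    """Nudge the pan-tilt until the image frame sits at the field center."""
    for _ in range(max_iter):
        offset = image_center_offset(grab_frame())
        if offset is None:
            return False  # nothing to align to
        if max(abs(offset[0]), abs(offset[1])) <= tol_px:
            return True   # camera is aimed at the display module
        move_camera(offset)
    return False
```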
In one embodiment, step S101 includes:
controlling the camera to capture the image picture to be detected displayed by the display module; controlling the head-mounted display equipment to move through a displacement device so that the center of the visual field of the camera is aligned with the center of the display module; judging whether an image captured by the camera is clear or not; and when the image captured by the camera is clear, determining that the display module is positioned at the focus position of the camera.
In application, the processor of the terminal device can control the camera to capture the image frame displayed by the display module and then judge whether the image frame is clear by means of an image-processing algorithm; if it is clear, the display module is determined to be at the focal position of the camera. The method for judging whether the image frame is clear may specifically be to obtain the MTF (Modulation Transfer Function) or SFR (Spatial Frequency Response) value of the image frame captured by the camera; this value is generally used to characterize the sharpness of the image frame, and if it reaches or approaches its maximum value, the image frame captured by the camera is determined to be clear.
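As a minimal sketch of this focus criterion, the loop below sweeps focus positions and keeps the one that maximizes a sharpness score. The variance of the Laplacian is used here as a simple stand-in for the MTF/SFR value named above, and set_focus and grab_frame are assumed hardware interfaces, not part of the embodiment.

```python
import cv2

def focus_metric(frame_gray):
    """Variance of the Laplacian: like MTF/SFR, it peaks when the image
    frame is in focus."""
    return cv2.Laplacian(frame_gray, cv2.CV_64F).var()

def autofocus(grab_frame, set_focus, focus_positions):
    """Sweep the focus drive and keep the position with the highest score."""
    best_pos, best_score = None, float("-inf")
    for pos in focus_positions:     # e.g. motor steps of the focus stage
        set_focus(pos)              # hypothetical focus-control call
        score = focus_metric(grab_frame())
        if score > best_score:
            best_pos, best_score = pos, score
    set_focus(best_pos)
    return best_pos  # display module is now at the camera's focal position
```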
In one embodiment, when step S101 is executed, the method further includes: the virtual reality display equipment is controlled to move through the displacement device, so that the center of the visual field of the camera is aligned to the center of the display module.
In application, whether or not the camera itself is movable, the virtual reality display device can be fixed on a displacement device, and the displacement device controls its movement. A tester can manually control the displacement device while observing with the naked eye the image frame displayed or projected by the display module within the camera's field of view, and move the center of the display module (that is, the center of the displayed image) to a position aligned with the center of the camera's field of view, so that the camera is aimed at the display module; at this point, the tester manually triggers the camera to focus on the display module manually or automatically. Alternatively, the displacement device can be communicatively connected to the terminal device, and the processor of the terminal device controls the displacement device to move; while the displacement device moves, the processor controls the camera to continuously capture the images within its field of view, acquires the captured images, and analyzes whether the image frame is located at the center of the camera's field of view. When the image frame captured by the camera is at the center of the field of view, the processor controls the displacement device to stop moving, determines that the camera is aimed at the display module, and then controls the camera to focus on the display module automatically. The terminal device controls the displacement device through the communication module by wired or wireless communication. When the camera is movable and the virtual reality display device can also be moved by the displacement device, at least one of the camera and the virtual reality display device can be moved according to actual needs so that the camera is aimed at the display module.
In application, the displacement device may be any device capable of moving in one-, two-, or three-dimensional space according to actual needs, for example, a controllable slide rail, a two-axis displacement platform, or a multi-axis displacement platform; the multi-axis displacement platform may specifically be a five-axis displacement platform. FIG. 2-b schematically shows, by way of example, the relative positional relationship among the terminal device 1, the camera 2, and the virtual reality display device 3 when the camera 2 is externally connected to the terminal device 1.
S102, when the display module is located at the focal position of the camera, acquiring an image picture displayed by the display module through the camera;
and step S103, acquiring optical parameters of the virtual reality display equipment according to the acquired image picture.
In application, when the display module is at the focal position of the camera, the image frame displayed by the display module and acquired by the camera is clear. Therefore, the image frame displayed by the display module in the current state can be acquired through the camera, and the optical parameters of the virtual reality display device can then be calculated from the acquired image frame. The optical parameters of a virtual reality display device are numerous and may include, for example, one or more of sharpness, field angle, distortion, virtual image distance, and the like. The optical parameters may further include one or more of contrast, dispersion, parallax, angular resolution, brightness uniformity, color uniformity, and color temperature. In this embodiment, by displaying the image frame to be measured through the display module, at least a plurality of optical parameters such as the sharpness, field angle, distortion, and virtual image distance of the virtual reality display device can be measured. Compared with the traditional approach of switching among multiple graphic cards (multiple image frames), the scheme of the embodiments of the invention can rapidly measure multiple optical parameters of the virtual reality display device through the same image frame, thereby improving measurement efficiency.
Various optical parameters of the virtual reality display device have been described above; how to quickly measure them using the same image frame (the same measurement graphic card), that is, how step S103 is specifically implemented, will be described below with reference to specific embodiments.
In one embodiment, the optical parameters of the virtual reality display device include sharpness, which may be characterized using the SFR (Spatial Frequency Response) value. The checkerboard region of the image frame displayed by the virtual reality display device includes checker cells of different contrasts; the cells may be rectangular or square. FIG. 3-a shows square cells alternating between black and white, and FIG. 3-b shows square cells with a brightness contrast of 4:1. The size of the checker cells may be set according to the actual situation.
That is, measuring the sharpness of the virtual reality display device specifically includes the following steps:
when a display module of the virtual reality display equipment is in a lighting state, controlling the camera to focus the display module; the display module displays an image picture to be detected, and the image picture comprises a checkerboard area and a pure color circular area surrounding the checkerboard area. This step is the same as the operation of step S101 described above, and is not described herein again.
When the display module is positioned at the focus position of the camera, acquiring an image picture displayed by the display module through the camera; this step is the same as the operation step of step S102 described above, and is not described herein again.
And acquiring the sharpness of the virtual reality display device according to the acquired image frame. In the specific operation of this step, since the same image frame must be used to measure several optical parameters simultaneously, when measuring the sharpness of the virtual reality display device the display frame acquired by the camera needs to be tilted, in order to be compatible with the measurement methods of the other optical parameters, so that the slanted-edge information of the checker cells in the checkerboard region can be acquired more accurately.
It can be understood that there are two ways of tilting the display image acquired by the camera. One is physical tilting: before the camera acquires the image displayed by the display module, the display module is controlled to rotate by a certain angle around its exit principal optical axis. The rotation of the display module can be controlled automatically or manually. After the display module has rotated by a certain angle relative to the camera, the image frame displayed by the display module and acquired by the camera is naturally tilted. The other is software processing: after the camera acquires the image frame displayed by the display module, the image frame is processed by a software algorithm so that it is rotated by a certain angle. Whichever method is used for the tilting, the rotation angle may lie in the range [2°, 10°]. After the tilting, the resulting image frame may be as shown in FIG. 4.
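A minimal sketch of the software tilting path, assuming OpenCV is available; the 5° default is just one value inside the 2° to 10° range mentioned above.

```python
import cv2

def tilt(image, angle_deg=5.0):
    """Rotate the captured frame by a small angle so the checker edges
    become slanted, as the slanted-edge SFR analysis requires."""
    h, w = image.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(image, rot, (w, h), flags=cv2.INTER_LINEAR)
```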
As shown in FIG. 5, in an embodiment in which the display frame is tilted physically, acquiring the sharpness of the virtual reality display device from the acquired image frame (step S103) includes:
Step S501: selecting a region to be measured from the checkerboard region of the image frame, and acquiring the slanted-edge information of the checker cells in the region to be measured;

Step S502: oversampling the slanted-edge information to obtain an edge spread function (ESF) curve;

Step S503: differentiating the ESF curve and then applying a Fourier transform to obtain the SFR value of the region to be measured.
As shown in FIG. 6, in an embodiment in which the display frame is tilted by software processing, acquiring the sharpness of the virtual reality display device from the acquired image frame (step S103) includes:
Step S600: rotating the acquired image frame by a certain angle;

Step S601: after rotating the image frame, selecting a region to be measured from the checkerboard region of the image frame, and acquiring the slanted-edge information of the checker cells in the region to be measured;

Step S602: oversampling the slanted-edge information to obtain an edge spread function (ESF) curve;

Step S603: differentiating the ESF curve and then applying a Fourier transform to obtain the SFR value of the region to be measured.
In step S600, when analyzing the image frame of the checker cells, the image frame may first be rotated, the slanted edges of the region to be measured are then identified, and the SFR sharpness calculation is performed afterwards.
If the acquired image frame is as shown in FIG. 4, then when steps S501-S503 or steps S601-S603 are performed, a region to be measured is first determined from the checkerboard region of the image frame, and the slanted-edge information of the checker cells is then acquired from the region to be measured, as shown in FIG. 7-a. That is, the slanted edge is the boundary between any two checker cells of different contrast in the region to be measured, for example, the boundary between a white cell and a black cell.
After the slanted-edge information is obtained, it can be oversampled to obtain a finer edge spread function curve; the edge spread function curve is then differentiated to obtain its rate of change (the line spread function), and a Fourier transform of that rate of change yields the SFR value of the region to be measured. This processing is shown in FIG. 7-b.
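A simplified slanted-edge sketch of this ESF-to-SFR pipeline is given below, assuming a grayscale region of interest containing one near-vertical slanted edge. It follows the ISO 12233 idea but is an illustration, not a metrology-grade implementation; the 4x oversampling factor and the Hanning window are common choices, not requirements of the embodiment.

```python
import numpy as np

def sfr_from_roi(roi, oversample=4):
    """Slanted-edge SFR: locate the edge, build an oversampled ESF,
    differentiate to the LSF, and Fourier-transform to the SFR."""
    roi = roi.astype(float)
    # 1. per-row edge position from the gradient centroid
    grad = np.abs(np.diff(roi, axis=1))
    cols = np.arange(grad.shape[1])
    centers = (grad * cols).sum(axis=1) / (grad.sum(axis=1) + 1e-12)
    # 2. fit a line to the edge (the tilt makes the fit well conditioned)
    rows = np.arange(roi.shape[0])
    slope, intercept = np.polyfit(rows, centers, 1)
    # 3. bin every pixel by its distance to the fitted edge -> oversampled ESF
    dist = np.arange(roi.shape[1])[None, :] - (slope * rows[:, None] + intercept)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=roi.ravel())
    esf = sums[counts > 0] / counts[counts > 0]
    # 4. ESF -> LSF (rate of change) -> |FFT| -> normalized SFR
    lsf = np.diff(esf) * np.hanning(esf.size - 1)
    sfr = np.abs(np.fft.rfft(lsf))
    return sfr / sfr[0]
```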
According to the measurement method provided by the embodiment of the invention, the sharpness of different regions can be measured using the boundary between any two checker cells of different contrast in the checkerboard region. At the same time, since the image frame displayed by the display module only needs to be tilted, the image frame of the same checkerboard graphic card can be used to measure several optical parameters of the virtual reality display device simultaneously, such as sharpness, field angle, virtual image distance, and distortion. The measurement method therefore has good compatibility: the optical parameters can be measured more quickly without changing the graphic card, which improves the test efficiency of the virtual reality display device.
In one embodiment, the optical parameters of the virtual reality display device specifically include distortion at full field of view. Distortion refers to the squeezing, stretching, shifting, twisting, etc. of the geometric position of the image pixels generated during the imaging process of the virtual reality display device relative to the reference system, resulting in a change in the geometric position, size, shape, orientation, etc. of the image, as shown in fig. 8. In the prior art, distortion information in the full field of view cannot be obtained. In the embodiment of the application, the distortion of the virtual reality display device in the full field of view can be obtained through the obtained image.
Namely, when measuring distortion in the full field of view of the virtual reality display device, the method specifically includes the following steps:
when a display module of the virtual reality display equipment is in a lighting state, controlling the camera to focus the display module; the display module displays an image picture to be detected, and the image picture comprises a checkerboard area and a pure color circular area surrounding the checkerboard area. This step is the same as the operation of step S101 described above, and is not described herein again.
When the display module is positioned at the focus position of the camera, acquiring an image picture displayed by the display module through the camera; this step is the same as the operation step of step S102 described above, and is not described herein again.
And acquiring the distortion of the virtual reality display device according to the acquired image frame. In the specific operation of this step, the checkerboard region comprises a plurality of checker cells that form a matrix with several rows and columns. Therefore, when the distortion of the virtual reality display device under the full field of view is acquired from the acquired image frame (in step S103), the coordinate information of the corner points on the outermost peripheral row and column of the checkerboard region needs to be acquired, and the distortion under the full field of view can be calculated from that coordinate information. As shown in FIG. 9, the method may specifically include:
Step S901: determining the coordinate information of six corner points on the outermost peripheral row and column of the checkerboard region based on the acquired image frame;

Step S902: substituting the acquired coordinate information of the six corner points into formula one:

TV Distortion = { [ (AB + EF) / 2 − CD ] / (2 × CD) } × 100%

to obtain the TV distortion under the full field of view; or, alternatively,

Step S903: substituting the acquired coordinate information of the six corner points into formula two:

SMIA TV Distortion = { [ (AB + EF) / 2 − CD ] / CD } × 100%

to obtain the SMIA TV distortion under the full field of view.

Here AB and EF are the vertical sizes between the corner points at the left and right ends of the head and tail rows, and CD is the vertical size between the midpoint corner points of the head and tail rows (a numerical sketch of both formulas follows below).
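A minimal numeric sketch of the two formulas as reconstructed above; the function names are illustrative, and the coordinates are assumed to be (x, y) pixel positions of the six corner points.

```python
def edge_heights(tl, tm, tr, bl, bm, br):
    """Vertical sizes from the six outermost corner points: AB and EF at
    the left and right ends of the head/tail rows, CD between the row
    midpoints. Each argument is an (x, y) pixel coordinate."""
    ab = abs(bl[1] - tl[1])
    ef = abs(br[1] - tr[1])
    cd = abs(bm[1] - tm[1])
    return ab, ef, cd

def tv_distortion(ab, ef, cd):
    """Formula one: TV distortion in percent over the full field of view."""
    return ((ab + ef) / 2.0 - cd) / (2.0 * cd) * 100.0

def smia_tv_distortion(ab, ef, cd):
    """Formula two: SMIA TV distortion; ideally twice the TV distortion."""
    return ((ab + ef) / 2.0 - cd) / cd * 100.0
```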
As is known, a checkerboard region may comprise a plurality of alternating checker cells, which form rows and columns of array corner points. To obtain the distortion under the full field of view, the coordinate information of six corner points on the outermost peripheral row and column of the checkerboard region needs to be calculated. The six corner points are specifically the corner points at the four corners of the checkerboard region and the corner points at the midpoints of the head and tail rows. However, because the outermost cells cannot be reliably distinguished from the background by contrast, the coordinate information of the six corner points cannot be read directly from the acquired image of the checkerboard region and must be calculated by other means. As shown in FIG. 10, one embodiment provides a method for calculating the coordinate information of the six corner points, which specifically includes:
step S1001, acquiring coordinate information of each identifiable corner point on a checkerboard area based on the acquired image picture;
in this step, when coordinate information of each identifiable corner point on the checkerboard region in the image is acquired, any corner point coordinate on the middle row and column can be directly identified except that the coordinate information cannot be determined because the corner point coordinates on the outermost peripheral row and column cannot be accurately distinguished from the background color. That is, the coordinates of the corner points on the remaining rows and columns can be identified except that the coordinates of the corner points on the 1 st row, the 1 st column, the nth row and the mth column can not be identified. That is, for a checkerboard with n rows and m columns, the corner coordinates of the remaining rows and columns can be identified except that the corner coordinates of the 1 st row, the 1 st column, the n th row and the m th column cannot be identified. Wherein n is the number of lines in the checkerboard area; m is the number of columns in the checkerboard area.
Step S1002: acquiring the side-length information of the checker cells based on the coordinate information of each identifiable corner point on the checkerboard region;
when the method is specifically operated, the implementation steps comprise:
selecting one corner point from the identifiable corner points as a target corner point; when selecting the target corner point, any recognizable corner point can be selected as the target corner point. For example, in one embodiment, the corner point in the second row and the second column in the checkerboard area is used as the target corner point, or the corner point in the (n-1) th row and the (m-1) th column in the checkerboard area is used as the target corner point.
Acquiring first coordinate information of a target corner point; and additionally acquiring second coordinate information of at least one corner point on the line where the target corner point is located and third coordinate information of at least one corner point on the column where the target corner point is located, so as to determine the side length information of the sub-checkerboard according to the relative position relationship between the second coordinate information, the third coordinate information and the first coordinate information (for example, according to the number information and the coordinate information of each corner point). The coordinates of the corner points are pixel coordinates of the corner points, so the coordinates of the corner points are determined according to the resolution of the camera.
And finally, determining the side length of the sub-checkerboard according to the relative position relationship among the first coordinate information, the second coordinate information and the third coordinate information of the target corner point.
For example, the length and width of the sub-checkerboard may be determined according to the relative position relationship between the first coordinate information and the second coordinate information and the third coordinate information of the target corner point. For example, the coordinates of the three corner points are (a, b), (c, b), and (a, d), the first side length information m of the checkerboard is (c-a)/K1, and the second side length information n of the checkerboard is (d-b)/K2. Wherein, K1, K2 are used for identifying the relative position relation or number information between three corner points. The values of m and n may be the same or different, the values of m and n are the same when the checkerboard grid is square, and the values of m and n are different when the checkerboard grid is rectangular.
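A minimal sketch of the side-length inference just described; the argument names and the outward extrapolation helper are illustrative assumptions, not part of the claimed method.

```python
def cell_size(p_target, p_same_row, p_same_col, k1, k2):
    """Infer the checker cell size from three identifiable corner points.
    p_* are (x, y) pixel coordinates; k1 and k2 are the corner-index
    spacings along the row and the column (the K1/K2 of the text)."""
    (a, b), (c, _), (_, d) = p_target, p_same_row, p_same_col
    m = (c - a) / k1  # cell width in pixels
    n = (d - b) / k2  # cell height in pixels
    return m, n

def extrapolate_corner(corner, m, n, col_steps, row_steps):
    """Estimate an outermost-row/column corner point by stepping whole
    cells outward from an identifiable corner point."""
    x, y = corner
    return x + col_steps * m, y + row_steps * n
```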
Step S1003: obtaining the coordinate information of the six corner points on the outermost peripheral row and column of the checkerboard region according to the side-length information of the checker cells and the coordinate information of each identifiable corner point on the checkerboard region.

After the side-length information of the checker cells is obtained, in order to obtain the coordinate information of the six corner points on the outermost peripheral row and column, it is necessary to additionally obtain the coordinate information of the corner points associated with those six corner points. The coordinate information of the corner points associated with the six corner points may be the coordinate information of corner points lying on the same row, the same column, or the same diagonal as the six corner points. It will be appreciated that this coordinate information may comprise, or may be calculated from, the coordinate information of the identifiable corner points described above.
After the coordinate information of the six corner points is calculated, step S902 or step S903 may be executed. In calculating the distortion of the virtual reality display device, the TV distortion may be calculated (step S902), or the SMIA TV distortion may be calculated (step S903). TV distortion is a measurement scheme widely used in the traditional optical industry (for example, for cameras); SMIA TV distortion is a newer measurement standard commonly adopted in international standards in recent years, and its measurement approach is more reasonable and accurate. Ideally, the SMIA TV distortion equals 2 times the TV distortion.
According to the measurement method provided by the embodiment of the invention, the coordinate information of the six corner points on the outermost peripheral row and column of the checkerboard region is obtained by algorithmic fitting, so distortion measurement under the full field of view can be realized. At the same time, the image frame of the same checkerboard graphic card can be used to measure several optical parameters of the virtual reality display device simultaneously, such as sharpness, field angle, and virtual image distance; the measurement method has good compatibility, the optical parameters can be measured more quickly without changing the graphic card, and the test efficiency of the virtual reality display device can be greatly improved.
In an embodiment of the present invention, the optical parameters of the virtual reality display device may specifically include the field angle. The existing method for determining the field angle is to display a rectangular solid-color graphic card on the display device, capture the graphic-card region displayed by the display device with a camera of known field angle, and then calculate the field angle of the display device from the size of the image captured by the camera and the size of the solid-color graphic-card region within it, using a trigonometric function or a ratio relation. The field angles of common display devices include a diagonal field angle, a transverse field angle, and a longitudinal field angle.

This method can conveniently measure the field angle of a display device. However, it is only suitable for display modules with a small field angle that display a rectangular frame (generally AR display devices), and cannot be used directly for the virtual reality display devices (VR display devices) common on the market today, which have a large field angle and display a circular frame. Accordingly, there is a need in the art for a method of measuring the field angle of a virtual reality display device.
In an embodiment of the present invention, a method for measuring a field angle of a virtual reality display device is provided, which specifically includes the following steps:
when a display module of the virtual reality display equipment is in a lighting state, controlling the camera to focus the display module; the display module displays an image picture to be detected, and the image picture comprises a checkerboard area and a pure color circular area surrounding the checkerboard area. This step is the same as the operation of step S101 described above, and is not described herein again.
When the display module is positioned at the focus position of the camera, acquiring an image picture displayed by the display module through the camera; this step is the same as the operation step of step S102 described above, and is not described herein again. And the number of the first and second groups,
And acquiring the field angle of the virtual reality display device according to the acquired image frame. In specific operation, as shown in FIG. 11, when acquiring the field angle of the virtual reality display device according to the acquired image frame is executed (step S103), this step may specifically include:
Step S1101: determining the diameter of the solid-color circular region based on the acquired image frame;
FIG. 12 is a schematic diagram of an image frame acquired by the camera. The image frame comprises a background region (the black region in FIG. 12) and a solid-color circular region (the white region in FIG. 12). Both are solid-color regions, the background region surrounds the solid-color circular region, and the two regions must have an obvious contrast difference; the purpose is that, during subsequent image recognition, the circular region and the background region can be distinguished by an algorithm.
Step S1102, determining a field angle of the virtual reality display device based on the diameter of the solid color circular area, the size of the image frame, and a preset camera field angle corresponding to the size of the image frame.
The field angle of the camera is known, and specifically, the preset camera field angle includes: the camera has a diagonal angle of view, a transverse angle of view and a longitudinal angle of view. However, since the virtual reality display device displays a circular image, the diagonal field of view, the camera transverse field of view, and the camera longitudinal field of view are not distinguished.
And determining the diameter of the pure color circular area based on the acquired image picture. Since the background region and the circular region are both solid color regions and have significant contrast, the circular region can be identified from the image picture, resulting in the diameter of the circular region.
There are two methods of identifying the diameter of the circular region. The first is to identify the circular region from the contrast between the background region and the circular region and then obtain its diameter; the second is to obtain the diameter of the circular region by traversing the image frame. The two methods are described separately below.
The first method comprises the following steps: according to the contrast between the background area and the circular area, a pure color circular area is directly identified by a software algorithm, and then the diameter AB of the circular area is calculated. Because the circle is not sensitive to direction, the diameter of the circular area may be transverse, longitudinal, or other direction through the center of the circle, as shown in FIG. 13.
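A minimal sketch of the first method, assuming OpenCV; Otsu thresholding and the minimum enclosing circle are illustrative choices, not requirements of the embodiment.

```python
import cv2

def circle_diameter_by_contrast(frame_gray):
    """Method one: binarize on the background/circle contrast and take the
    minimum enclosing circle of the largest contour as the circular region."""
    _, mask = cv2.threshold(frame_gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    _, radius = cv2.minEnclosingCircle(largest)
    return 2.0 * radius  # the diameter AB, in pixels
```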
The second method obtains the diameter of the circular region by traversing the image frame. FIG. 14 is a schematic flow chart of this method, which specifically includes:
s1401, traverse the picture of the picture from any direction with the straight line form, on any straight line, get at least one pixel point of pixel value one of the label background area and/or pixel point of pixel value two of the label circular area.
S1402, determining a straight line which identifies the maximum pixel point of the second pixel value of the circular area.
And S1403, in a straight line which identifies the maximum pixel points of the second pixel value of the circular area, connecting lines of the pixel points of the second pixel value are used as the diameter of the circular area.
It is known that an image frame comprises many pixels. For an image frame comprising a background region and a circular region, both solid-color, the image is effectively one of binary pixel values; for example, the pixels of the background region may be identified by pixel value one (e.g., 0) and the pixels of the circular region by pixel value two (e.g., 1). Therefore, in step S1401, the image frame can be traversed with straight lines from any direction. On any one straight line, one may obtain only pixel points of pixel value one (the line lies entirely in the background region), only pixel points of pixel value two (the line lies entirely in the circular region), or a mixture of both (the line spans the circular region and the background region).

After step S1401 is performed, several straight lines spanning both the circular region and the background region are obtained, as shown in FIG. 15. Among them there is a straight line K whose number of pixels belonging to the circular region is larger than that of any other line, and step S1402 finds this line K. Because line K contains more circular-region pixels than any other line, it follows from the geometry of a circle that line K passes through the center of the circular region. Therefore, in step S1403, connecting all the pixel points of pixel value two on this line yields the diameter AB of the circular region.
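A minimal sketch of this traversal for the horizontal direction (other directions are analogous, since a circle is not sensitive to direction); the input is assumed to be the binary image described above, with background pixels 0 and circle pixels 1.

```python
import numpy as np

def diameter_by_traversal(binary):
    """Method two: traverse the frame with horizontal lines; the line with
    the most circle pixels passes through the center (the line K of
    FIG. 15), and its run of circle pixels is the diameter."""
    counts = (binary == 1).sum(axis=1)  # circle pixels on each horizontal line
    k = int(np.argmax(counts))          # index of line K
    return int(counts[k])               # diameter AB, in pixels
```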
After the diameter of the circular area is obtained according to the first manner or the second manner, step S1102 may be executed. That is, the angle of view of the virtual reality display device is determined based on the diameter of the solid circular area, the size of the image screen, and a preset camera angle of view corresponding to the size of the image screen.
The size of the image frame and the preset camera field angle corresponding to that size are pre-stored in the device or can be acquired. The camera field angle likewise includes a diagonal field angle, a transverse field angle, and a longitudinal field angle. When the size of the image frame is the diagonal size, the corresponding preset camera field angle is the diagonal field angle; when the size is the transverse size, the corresponding preset camera field angle is the transverse field angle; and when the size is the longitudinal size, the corresponding preset camera field angle is the longitudinal field angle.
The device needs to acquire the size of the image frame; once acquired, the field angle of the virtual reality display device can be determined from that size, the preset camera field angle corresponding to it, and the diameter of the circular region.
Further, to calculate the field angle of the virtual reality display device accurately, step S1102 may be executed in either of two ways, S11021 or S11022, as follows:
S11021: calculating the field angle of the virtual reality display device according to formula III, based on the diameter of the circular area, the size of the image picture, and the preset camera field angle corresponding to the size of the image picture; formula III is:

    F_VR = (P_VR / P_cam) × F_cam    (formula III)

where P_VR is the diameter of the circular area, F_VR is the field angle of the virtual reality display device, P_cam is the size of the image picture, and F_cam is the preset camera field angle corresponding to the size of the image picture.
In the present embodiment, the ratio of the diameter of the circular area to the field angle of the virtual reality display device equals the ratio of the size of the image picture to the camera field angle corresponding to that size. The field angle of the virtual reality display device can therefore be obtained from this simple proportion, specifically by formula III. For example, as shown in fig. 16, P_AB is the diameter of the solid color circular area and P_cam is the size of the image picture; substituting P_AB (as P_VR) and P_cam into formula III yields the field angle of the virtual reality display device.
S11022: calculating the field angle of the virtual reality display device according to formula IV, based on the diameter of the circular area, the size of the image picture, and the preset camera field angle corresponding to the size of the image picture; formula IV is:

    tan(F_VR / 2) = (P_VR / P_cam) × tan(F_cam / 2),  i.e.  F_VR = 2 × arctan((P_VR / P_cam) × tan(F_cam / 2))    (formula IV)
In the present embodiment, the ratio of the diameter of the circular area to the tangent of half the field angle of the virtual reality display device equals the ratio of the size of the image picture to the tangent of half the camera field angle. The field angle of the virtual reality display device can therefore be obtained through this trigonometric relation, specifically by formula IV, in which P_VR is the diameter of the circular area, F_VR is the field angle of the virtual reality display device, P_cam is the size of the image picture, and F_cam is the preset camera field angle corresponding to that size. For example, as shown in fig. 16, P_AB is the diameter of the circular area and P_cam is the size of the image picture; substituting P_AB (as P_VR) and P_cam into formula IV yields the field angle of the virtual reality display device.
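For illustration only, a small sketch computing both variants follows; the half-angle tangent form of formula IV is an assumption consistent with the proportion described above, and all numeric values are hypothetical.

```python
import math

def fov_linear(p_vr: float, p_cam: float, f_cam_deg: float) -> float:
    """Formula III: simple proportion between sizes and field angles."""
    return (p_vr / p_cam) * f_cam_deg

def fov_tangent(p_vr: float, p_cam: float, f_cam_deg: float) -> float:
    """Formula IV (assumed half-angle form): tangent-based proportion."""
    half = math.atan((p_vr / p_cam) * math.tan(math.radians(f_cam_deg) / 2))
    return math.degrees(2 * half)

# Hypothetical numbers: circle diameter 1800 px in a 2400 px image picture,
# camera field angle 120 degrees measured along the same direction.
print(fov_linear(1800, 2400, 120.0))   # 90.0 degrees
print(fov_tangent(1800, 2400, 120.0))  # ~104.9 degrees
```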
With the above field-angle testing method, the field angle of the virtual reality display device can be measured with a graphic card containing a circular area, overcoming the inability of prior-art field-angle measurement methods to measure the field angle of a virtual reality display device. Moreover, most of the important optical parameters of the display device can be measured with a single graphic card, which simplifies the data measurement process and improves measurement efficiency.
In an embodiment of the present invention, the optical parameters of the virtual reality display device further include the virtual image distance, i.e., the distance from the virtual image plane formed by the virtual reality display device to the exit pupil (the pupil of the human eye). Because measuring the virtual image distance relies on the object space distance of the camera, the method further includes, when the display module is at the focal position of the camera: acquiring a first focus value of the camera. This embodiment is described in detail below with reference to fig. 17.
Fig. 17 is a schematic flow diagram of another embodiment of the present invention. In addition to steps S1701 to S1703, which are the same as steps S101 to S103, the method further includes:
Step S1704, acquiring the object space distance of the camera;
Step S1705, taking the acquired object space distance of the camera as the virtual image distance of the virtual reality display device.
The object space focus of the camera lies on the image space focal plane of the head-mounted display module and coincides with the image space focus of the display module; at this moment, the object space distance of the camera can be taken as the virtual image distance (i.e., the image space distance) of the virtual reality display device. The object space distance of the camera can be obtained from a correspondence, measured and recorded in advance, between the camera's focus value and its object space distance.
In use, when the virtual reality display device includes two display modules, the above steps may be executed repeatedly: the camera is controlled to focus on one of the display modules, and when that display module is at the focal position of the camera, the object space distance of the camera is acquired to obtain the virtual image distance of that display module; afterwards, the virtual image distance of the other display module is obtained in the same way.
There are various ways to acquire the object space distance of the camera. For example, as shown in fig. 18, in one embodiment the camera is an auto-focus camera, and step S1704 includes:
step S1801, when the display module is at the focal position of the camera, obtain a first focus value of the camera.
In application, when the camera finishes the automatic focusing operation on the image picture displayed or projected by the display module, the center of the image picture is located at the focal position of the camera; at this moment, the processor of the terminal device acquires the focus value of the camera. In this embodiment, the focus value obtained when the display module is at the focal position of the camera is called the first focus value, to distinguish it from the second focus value obtained when the focusing graphic card is at the focal position of the camera.
Step S1802, acquiring the object space distance corresponding to the first focus value according to the first focus value and a pre-stored focusing index table; the focusing index table records K groups of correspondences between different object space distances of the camera and second focus values, K being an integer greater than 1;
Step S1803, taking the object space distance corresponding to the first focus value as the object space distance of the camera.
In application, the focusing index table is a data table formed by recording K groups of correspondences, acquired in advance through K groups of measurement operations, between different object space distances of the camera and second focus values. K is an integer greater than 1; the larger K is, the more accurate the virtual image distance obtained from the focusing index table.
In application, the focusing index table may be a look-up table (LUT), or another data table or random access memory (RAM) type storage medium with the same function, namely, returning the corresponding output data for given input data.
In application, when the focusing index table records a second focus value equal to the first focus value, the object space distance corresponding to the first focus value can be found directly in the table and used as the object space distance of the camera when the display module is at the focal position of the camera, thereby obtaining the virtual image distance of the display module. When no recorded second focus value equals the first focus value, the object space distance corresponding to the first focus value cannot be found directly; in that case, the object space distance corresponding to the second focus value closest to the first focus value can be used instead, again yielding the virtual image distance of the display module.
In one embodiment, step S1802 includes:
searching the focusing index table, according to the first focus value, for a third focus value having the minimum absolute difference from the first focus value;
and acquiring the object space distance corresponding to the found third focus value in the focusing index table as the object space distance corresponding to the first focus value.
In application, since the focusing index table does not necessarily record a second focus value equal to the first focus value, the corresponding object space distance is looked up via the third focus value, that is, the recorded second focus value whose absolute difference from the first focus value is smallest; this guarantees that an object space distance can always be found in the pre-stored table. The absolute difference is the absolute value of the difference between two values.
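A minimal sketch of this nearest-neighbor lookup is given below; the table contents and the function name are illustrative assumptions.

```python
def object_distance_from_focus(first_focus: float,
                               focus_index_table: dict[float, float]) -> float:
    """Return the object space distance for a measured first focus value.

    focus_index_table maps K (> 1) pre-recorded second focus values to
    object space distances.
    """
    # Third focus value: the recorded second focus value whose absolute
    # difference from the first focus value is smallest.
    third_focus = min(focus_index_table, key=lambda n: abs(n - first_focus))
    return focus_index_table[third_focus]

# Hypothetical table: second focus value -> object space distance (metres)
table = {120.0: 0.5, 240.0: 1.0, 360.0: 2.0, 480.0: 4.0}
print(object_distance_from_focus(250.0, table))  # 1.0 (nearest value: 240.0)
```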
As shown in fig. 19, in one embodiment, on the basis of the embodiment corresponding to fig. 18, the following steps for forming the focusing index table are performed before step S1701:
Step S1901, acquiring K groups of correspondences between different object space distances of the camera and second focus values;
Step S1902, recording the K groups of correspondences between object space distances and second focus values to form the focusing index table.
In application, before the step of controlling the camera to focus on the display module, the camera is first aimed at the center position of a focusing graphic card placed at a certain separation distance from the camera, and automatic focusing is performed; the second focus value of the camera at that moment is acquired, the separation distance between the focusing graphic card and the camera is taken as the object space distance of the camera, and a correspondence between this object space distance and the second focus value is established. The separation distance between the camera and the focusing graphic card is then changed to obtain the next group's correspondence between object space distance and second focus value. This is repeated until a sufficient number of correspondences between object space distances and second focus values have been obtained, which are recorded to form the focusing index table.
As shown in fig. 20, in one embodiment, step S1901 includes the steps of:
step S2001, when the focusing card is located at any position within the imaging range of the camera, acquiring a separation distance between the camera and the focusing card.
In application, the separation distance between the camera and the focusing graphic card can be measured by a ranging tool controlled by the user or the terminal device, such as an infrared rangefinder, a laser rangefinder, or an electronic ruler, or measured manually by the user with an ordinary ruler.
And step S2002, keeping the spacing distance between the camera and the focusing graphic card unchanged, and controlling the camera to automatically focus the focusing graphic card.
In application, the side of the focusing graphic card facing the camera carries a non-solid-color image containing at least two image elements, to facilitate automatic focusing by the camera. Specifically, the focusing graphic card may be a high-contrast, non-solid image with a central point, such as a black-and-white image of a symmetrical cross arrow. The object space distance between the camera and the focusing graphic card can be set to any distance according to actual needs.
In application, the focusing graphic card may be fixed by a displacement device with its image facing the camera. In a manual setup, the displacement device carries a ranging function or a distance scale readable by eye; a tester manually moves the displacement device to set the separation distance between the camera and the image of the focusing graphic card to a known object space distance, and then, keeping that distance unchanged, finely adjusts the camera while observing the image of the focusing graphic card in the camera's field of view, aligning the center of the field of view with the center of the card image. Once the camera is aimed at the focusing graphic card, the tester manually triggers the camera to perform automatic focusing. Alternatively, in an automated setup using a two-axis or multi-axis pan-tilt camera, the terminal device is communicatively connected to both the displacement device and the camera; the processor of the terminal device controls at least one of them to move, sets the distance between the camera and the card image to a known object space distance and keeps it unchanged, finely adjusts the camera while controlling it to continuously capture images within its field of view, and analyzes each captured image to determine whether the card image lies at the center of the field of view. When it does, the camera is controlled to stop moving, the processor determines that the camera is aimed at the focusing graphic card, and the processor then controls the camera to automatically focus on it.
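A rough sketch of the automated alignment loop, under assumed camera and displacement-device interfaces (`capture`, `view_center`, `autofocus`, `move_toward`, and the `locate_card_center` analysis routine are all hypothetical):

```python
from typing import Callable, Tuple

def align_and_focus(camera, stage,
                    locate_card_center: Callable[[object], Tuple[float, float]],
                    tol_px: float = 2.0) -> None:
    """Center the focusing graphic card in the camera view, then autofocus.

    camera -- assumed interface: capture(), view_center(), autofocus()
    stage  -- assumed pan-tilt/displacement interface: move_toward(dx, dy)
    locate_card_center -- hypothetical image analysis returning the card
                          image center (x, y) within a captured frame
    """
    while True:
        frame = camera.capture()
        cx, cy = locate_card_center(frame)
        vx, vy = camera.view_center()
        dx, dy = vx - cx, vy - cy
        if abs(dx) <= tol_px and abs(dy) <= tol_px:
            break                        # card image is centered in the view
        stage.move_toward(dx, dy)        # fine-tune toward the view center
    camera.autofocus()                   # then trigger automatic focusing
```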
Step S2003, acquiring a second focus value of the camera when the focusing graphic card is at the focal position of the camera.
In application, when the camera finishes the automatic focusing operation on the image of the focusing graphic card, the image center is located at the focal position of the camera; at this moment, the processor of the terminal device acquires the focus value of the camera as the second focus value.
Step S2004, taking the separation distance between the camera and the focusing graphic card as the object space distance of the camera, and establishing the correspondence between this object space distance and the second focus value.
In an application, the correspondence between the object distance and the second focus value may be a mapping relationship.
Step S2005, changing the separation distance between the camera and the focusing graphic card, and then returning to step S2001, until K groups of correspondences between different object space distances of the camera and second focus values are obtained.
In application, after one group's correspondence between an object space distance and a second focus value is obtained, the object space distance between the camera and the focusing graphic card is changed; the flow then returns to step S2001, and steps S2001 to S2004 are repeated to obtain the next group's correspondence. This cycle continues until K groups of correspondences between different object space distances and second focus values are obtained.
In application, the focusing index table may be created in advance and written each time one group's correspondence between an object space distance and a second focus value is obtained; alternatively, the table may be created after all K groups of correspondences are obtained and the K groups written into it in one batch.
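The calibration procedure of fig. 20 could be sketched as follows; the `stage.set_separation` helper and the camera interface are assumptions for illustration.

```python
def build_focus_index_table(camera, stage,
                            distances_m: list[float]) -> dict[float, float]:
    """Build a focusing index table: second focus value -> object space distance.

    distances_m -- the K (> 1) separation distances to calibrate at.
    """
    table: dict[float, float] = {}
    for d in distances_m:
        stage.set_separation(d)    # S2001/S2005: set and record the distance
        camera.autofocus()         # S2002: autofocus on the focusing card
        n = camera.focus_value()   # S2003: read the second focus value
        table[n] = d               # S2004: record the correspondence
    return table

# Hypothetical usage with K = 4 calibration distances (metres):
# table = build_focus_index_table(camera, stage, [0.5, 1.0, 2.0, 4.0])
```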
The table below shows an example of a focusing index table, recording the correspondences between K different focus values N1, N2, …, NK (i.e., second focus values) and K different object space distances L1, L2, …, LK:

    Focus value      Object space distance
    N1               L1
    N2               L2
    …                …
    NK               LK
According to the method provided by the application, when the display module of the virtual reality display device is in a lighting state and the camera is aligned with it, the camera is controlled to automatically focus on the display module; when the display module is at the focal position of the camera, a first focus value of the camera is acquired; and according to the first focus value and a pre-stored index table recording K groups of correspondences between different object space distances and second focus values, the object space distance corresponding to the first focus value is acquired and taken as the virtual image distance of the display module. This enables simple, fast, low-cost, and accurate measurement of the virtual image distance of the virtual reality display device, and is widely applicable in the mass-production and rapid-development stages of virtual reality display devices.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
As shown in fig. 21, an embodiment of the present application further provides a measurement apparatus 100 configured to perform the method steps in the foregoing method embodiments. The measurement apparatus 100 may be a camera apparatus integrating a camera and a processor, or a virtual apparatus implemented in the processor of the terminal device. The measurement apparatus 100 includes:
a processing unit 101 and a camera unit 102;
the camera unit 102 is configured to focus on a display module of the virtual reality display device when the display module is in a lighting state, the display module displaying an image picture to be measured that comprises a checkerboard area and a pure color circular area surrounding the checkerboard area; and to acquire the image picture displayed by the display module when the display module is at the focal position of the camera unit 102.
The processing unit 101 is configured to acquire the optical parameters of the virtual reality display device according to the acquired image picture. The optical parameters may include one or more of sharpness, field angle, distortion, virtual image distance, and the like, and may further include one or more of contrast, dispersion, parallax, angular resolution, brightness uniformity, and color uniformity. In application, the camera unit 102 and the processing unit 101 are software program units in a processor.
For example, when the optical parameters include sharpness, the processing unit 101 is specifically configured to acquire the sharpness of the virtual reality display device according to the acquired image picture; in an embodiment, sharpness is identified by the SFR (spatial frequency response) value.
Before the image picture displayed by the display module is acquired through the camera unit 102, the processing unit 101 controls the display module to rotate by a certain angle around its exit principal optical axis. Acquiring the sharpness of the virtual reality display device according to the acquired image picture may then specifically include: selecting a region to be measured from the checkerboard area of the image picture and acquiring the slanted-edge information of the checkerboard in that region; supersampling the slanted-edge information to obtain an edge spread function curve; and successively differentiating and Fourier-transforming the spread function curve to obtain the SFR value of the region to be measured. Alternatively, the processing unit 101 rotates the acquired image picture by a certain angle; after the rotation, it selects a region to be measured from the checkerboard area of the image picture, acquires the slanted-edge information of the checkerboard in that region, supersamples it to obtain an edge spread function curve, and successively differentiates and Fourier-transforms the curve to obtain the SFR value of the region to be measured. The certain angle is in the range [2°, 10°].
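As a sketch only, the spread-function pipeline described above might look as follows, assuming the supersampled edge spread function samples are already available as a 1-D array; the windowing step and the synthetic sigmoid edge are illustrative assumptions.

```python
import numpy as np

def sfr_from_esf(esf: np.ndarray) -> np.ndarray:
    """Compute an SFR (MTF) curve from a supersampled edge spread function."""
    lsf = np.diff(esf)                  # differentiate: line spread function
    lsf = lsf * np.hanning(lsf.size)    # window to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))      # Fourier transform
    return mtf / mtf[0]                 # normalize so that SFR(0) = 1

# Synthetic demo: a soft edge yields an SFR that falls off with frequency
x = np.linspace(-3, 3, 256)
esf = 1.0 / (1.0 + np.exp(-2.0 * x))    # sigmoid stands in for a real ESF
print(sfr_from_esf(esf)[:5])
```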
For example, when the optical parameters include distortion under the full field of view, the processing unit 101 is specifically configured to acquire the distortion under the full field of view of the virtual reality display device according to the acquired image picture. In an embodiment, the checkerboard area includes a plurality of sub-checkerboards forming a plurality of rows and a plurality of columns, and acquiring the distortion under the full field of view according to the acquired image picture includes:
determining, based on the acquired image picture, coordinate information of six corner points on the outermost rows and columns of the checkerboard area, the six corner points being the corner points of the four corners and the corner points at the midpoints of the top and bottom row edges; and substituting the acquired coordinate information of the six corner points into the formula

    TV distortion = [ (AB + EF)/2 − CD ] / (2 × CD) × 100%

to obtain the TV distortion under the full field of view; or substituting the acquired coordinate information of the six corner points into the formula

    SMIA TV distortion = [ (AB + EF)/2 − CD ] / CD × 100%

to obtain the SMIA TV distortion under the full field of view; where AB and EF denote the vertical sizes between the corner points of the four corners (along the outermost left and right columns), and CD denotes the vertical size between the corner points at the midpoints of the top and bottom row edges.
It should be noted that determining the coordinate information of the six corner points on the outermost rows and columns of the checkerboard area based on the acquired image picture includes: acquiring coordinate information of each identifiable corner point in the checkerboard area based on the acquired image picture; acquiring side length information of the sub-checkerboards based on the coordinate information of each identifiable corner point; and finally obtaining the coordinate information of the six corner points on the outermost rows and columns of the checkerboard area from the side length information of the sub-checkerboards and the coordinate information of each identifiable corner point.
Acquiring the side length information of the sub-checkerboards based on the coordinate information of each identifiable corner point in the checkerboard area includes: selecting one of the identifiable corner points as a target corner point; acquiring first coordinate information of the target corner point, second coordinate information of at least one corner point on the row of the target corner point, and third coordinate information of at least one corner point on the column of the target corner point; and determining the side length of a sub-checkerboard from the relative positional relationship among the first, second, and third coordinate information.
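Purely to illustrate the two distortion formulas above, assuming the six corner points have already been located (the labels and coordinate values below are hypothetical):

```python
def tv_distortions(corners: dict[str, tuple[float, float]]) -> tuple[float, float]:
    """Compute TV and SMIA TV distortion (%) from six checkerboard corners.

    corners -- 'A'/'B': top/bottom corners of the left column; 'E'/'F':
               top/bottom corners of the right column; 'C'/'D': midpoints
               of the top and bottom row edges. Values are (x, y) pixels.
    """
    def vsize(p: str, q: str) -> float:
        return abs(corners[p][1] - corners[q][1])  # vertical size in pixels

    ab, ef, cd = vsize("A", "B"), vsize("E", "F"), vsize("C", "D")
    edge_mean = (ab + ef) / 2.0
    smia = (edge_mean - cd) / cd * 100.0   # SMIA TV distortion
    tv = smia / 2.0                        # TV distortion = SMIA / 2
    return tv, smia

# Hypothetical pincushion case: edge columns taller than the center
pts = {"A": (0, 0), "B": (0, 1020), "E": (1900, 0), "F": (1900, 1020),
       "C": (950, 10), "D": (950, 1010)}
print(tv_distortions(pts))  # (1.0, 2.0)
```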
For example, when the optical parameters include the field angle, the processing unit 101 is specifically configured to acquire the field angle of the virtual reality display device according to the acquired image picture, which specifically includes:
determining the diameter of the pure color circular area based on the image picture; and determining the field angle of the virtual reality display device based on the diameter of the pure color circular area, the size of the image picture, and the preset camera field angle corresponding to the size of the image picture.
There are two methods for determining the diameter of the circular area. The first identifies the diameter from the contrast between the background area and the circular area: the pure color circular area is identified directly by a software algorithm according to that contrast, and its diameter is then calculated. The second acquires the diameter by traversing the image picture: the image picture is traversed from any direction in the form of straight lines, and on each line at least one pixel point of the first pixel value identifying the background area and/or pixel points of the second pixel value identifying the circular area are obtained; the straight line containing the most pixel points of the second pixel value is determined; and finally, on that line, the segment connecting the pixel points of the second pixel value is taken as the diameter of the circular area.
Determining the field angle of the virtual reality display device based on the diameter of the pure color circular area, the size of the image picture, and the preset camera field angle corresponding to the size of the image picture includes calculating the field angle according to the formula

    F_VR = (P_VR / P_cam) × F_cam

or, alternatively, according to the formula

    tan(F_VR / 2) = (P_VR / P_cam) × tan(F_cam / 2)

where P_VR is the diameter of the pure color circular area, F_VR is the field angle of the virtual reality display device, P_cam is the size of the image picture, and F_cam is the preset camera field angle corresponding to the size of the image picture.
For example, when the optical parameters include the virtual image distance, the processing unit 101 is specifically configured to acquire the virtual image distance of the virtual reality display device according to the acquired image picture, which specifically includes: acquiring the object space distance of the camera, and taking the acquired object space distance of the camera as the virtual image distance of the virtual reality display device.
In summary, the scheme of the embodiment of the invention can rapidly measure a plurality of optical parameters of the virtual reality display device through the same image frame, thereby improving the measurement efficiency.
As shown in fig. 22, an embodiment of the present application also provides a terminal device 10 including: at least one processor 11 (only one shown in fig. 22), a memory 12, and a computer program 13 stored in the memory 12 and executable on the at least one processor 11, the steps in any of the various measurement method embodiments described above being implemented when the computer program 13 is executed by the processor 11.
In application, the terminal device may be a desktop computer, an industrial personal computer, a super mobile personal computer, a notebook computer, a palm computer, a tablet computer, a mobile phone, a personal digital assistant, a cloud server, and the like. The terminal device may include, but is not limited to, a processor, a memory. Those skilled in the art will appreciate that fig. 22 is merely an example of a terminal device, and does not constitute a limitation of the terminal device, and may include more or less components than those shown, or combine some components, or different components, such as an input-output device, a network access device, etc.
An embodiment of the present application further provides a measurement system, including a camera and the above terminal device, the terminal device being communicatively connected to the camera. In one embodiment, the measurement system further includes a displacement device communicatively connected to the terminal device and configured to fix the head-mounted display device and to change the position of the focusing graphic card; the displacement device is a controllable slide rail, a two-axis displacement platform, or a multi-axis displacement platform.
It should be noted that, because the above-mentioned information interaction, execution process, and other contents between the device, the unit, and the system are based on the same concept, specific functions and technical effects thereof may be referred to specifically in the method embodiment section, and are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program can implement the steps in the above-mentioned measurement method embodiments.
The embodiment of the present application provides a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above-mentioned measurement method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (25)

1. A measurement method for measuring optical parameters of a virtual reality display device, comprising:
when a display module of the virtual reality display device is in a lighting state, controlling a camera to focus on the display module; the display module is used for displaying an image picture to be measured, and the image picture comprises a checkerboard area and a pure color circular area surrounding the checkerboard area;
when the display module is positioned at the focal position of the camera, acquiring an image picture displayed by the display module through the camera;
and acquiring optical parameters of the virtual reality display equipment according to the acquired image picture.
2. The measurement method of claim 1, further comprising: customizing the proportion of the checkerboard area to the pure color circular area.
3. The measurement method of claim 2, wherein the checkerboard areas comprise checkerboard pictures of different contrasts.
4. The measurement method of claim 3, wherein the brightness contrast of the checkerboard in the checkerboard area is 4:1.
5. The measurement method of any one of claims 1 to 4, wherein the optical parameters comprise sharpness, identified by an SFR value, and the checkerboard area comprises a plurality of sub-checkerboards;
wherein, before the image picture displayed by the display module is acquired through the camera, the method further comprises: controlling the display module to rotate by a certain angle around an exit principal optical axis of the display module; and the acquiring optical parameters of the virtual reality display device according to the acquired image picture specifically comprises: selecting a region to be measured from the checkerboard area of the image picture, and acquiring slanted-edge information of the checkerboard in the region to be measured; supersampling the slanted-edge information to obtain an edge spread function curve; and successively differentiating and Fourier-transforming the spread function curve to obtain an SFR value of the region to be measured; or,
the acquiring optical parameters of the virtual reality display device according to the acquired image picture specifically comprises: rotating the acquired image picture by a certain angle; after the image picture is rotated, selecting a region to be measured from the checkerboard area of the image picture, and acquiring slanted-edge information of the checkerboard in the region to be measured; supersampling the slanted-edge information to obtain an edge spread function curve; and successively differentiating and Fourier-transforming the spread function curve to obtain an SFR value of the region to be measured.
6. The measurement method of claim 5, wherein the certain angle is in the range [2°, 10°].
7. The measurement method according to any one of claims 1 to 4, wherein the optical parameter comprises distortion at full field of view, the checkerboard area comprising a plurality of sub-checkerboards forming a plurality of rows and a plurality of columns; the acquiring optical parameters of the virtual reality display device according to the acquired image picture comprises:
determining, based on the acquired image picture, coordinate information of six corner points on the outermost rows and columns of the checkerboard area; the six corner points comprising: the corner points of the four corners and the corner points at the midpoints of the top and bottom row edges;
substituting the acquired coordinate information of the six corner points into the formula

    TV distortion = [ (AB + EF)/2 − CD ] / (2 × CD) × 100%

to obtain the TV distortion under the full field of view; or substituting the acquired coordinate information of the six corner points into the formula

    SMIA TV distortion = [ (AB + EF)/2 − CD ] / CD × 100%

to obtain the SMIA TV distortion under the full field of view;
wherein AB and EF denote the vertical sizes between the corner points of the four corners, and CD denotes the vertical size between the corner points at the midpoints of the top and bottom row edges.
8. The measurement method according to claim 7, wherein the determining coordinate information of six corner points on the outermost peripheral row and column of the checkerboard area based on the acquired image picture comprises:
acquiring coordinate information of each identifiable corner point on the checkerboard area based on the acquired image picture;
acquiring side length information of the sub-checkerboards based on coordinate information of each identifiable corner point on the checkerboard area;
and obtaining coordinate information of six corner points on the outermost peripheral row and column in the checkerboard area according to the side length information of the sub-checkerboard and the coordinate information of each recognizable corner point on the checkerboard area.
9. The method as claimed in claim 8, wherein said obtaining side length information of said sub-checkerboard based on coordinate information of each identifiable corner point on said checkerboard area comprises:
selecting one corner point from the identifiable corner points as the target corner point;
acquiring first coordinate information of the target corner point, and additionally acquiring second coordinate information of at least one corner point on a line where the target corner point is located and third coordinate information of at least one corner point on a column where the target corner point is located;
and determining the side length of the sub-checkerboard according to the relative position relationship among the first coordinate information, the second coordinate information and the third coordinate information of the target corner point.
10. The measurement method according to any one of claims 1 to 4, wherein the optical parameters include a field angle, and the acquiring optical parameters of the virtual reality display device according to the acquired image frame includes:
determining the diameter of the pure color circular area based on the image picture;
and determining the field angle of the virtual reality display equipment based on the diameter of the pure color circular area, the size of the image picture and the preset camera field angle corresponding to the size of the image picture.
11. The measurement method of claim 10, wherein the image picture further comprises a background area, the background area is disposed around the pure color circular area, and the background area is a solid color area with a distinct contrast to the pure color circular area.
12. The measurement method of claim 11, wherein the determining the diameter of the pure color circular area based on the image picture comprises:
identifying the diameter of the pure color circular area according to the contrast between the background area and the pure color circular area; or,
traversing the image picture from any direction in the form of straight lines, and obtaining, on any straight line, at least one pixel point of a first pixel value identifying the background area and/or pixel points of a second pixel value identifying the pure color circular area; determining the straight line containing the largest number of pixel points of the second pixel value identifying the pure color circular area; and, on that straight line, taking the segment connecting the pixel points of the second pixel value as the diameter of the pure color circular area.
13. The measurement method according to any one of claims 10 to 12, wherein the determining the field angle of the virtual reality display device based on the diameter of the solid color circular region, the size of the image screen, and the preset camera field angle corresponding to the size of the image screen includes:
calculating the field angle of the virtual reality display device according to formula I, based on the diameter of the pure color circular area, the size of the image picture, and the preset camera field angle corresponding to the size of the image picture; wherein formula I is:

    F_VR = (P_VR / P_cam) × F_cam

or,
calculating the field angle of the virtual reality display device according to formula II, based on the diameter of the pure color circular area, the size of the image picture, and the preset camera field angle corresponding to the size of the image picture; wherein formula II is:

    tan(F_VR / 2) = (P_VR / P_cam) × tan(F_cam / 2)

wherein P_VR is the diameter of the pure color circular area, F_VR is the field angle of the virtual reality display device, P_cam is the size of the image picture, and F_cam is the preset camera field angle corresponding to the size of the image picture.
14. The measurement method according to any one of claims 1 to 4, wherein the optical parameters of the virtual reality display device further include: a virtual image distance; when the display module is at the focal position of the camera, the method further comprises:
acquiring the object space distance of the camera;
and taking the obtained object space distance of the camera as a virtual image distance of the virtual reality display equipment.
15. The measurement method of claim 14, wherein the camera is an auto-focus camera, and when the display module is at the focal position of the camera, the step of acquiring the object space distance of the camera comprises:
when the display module is located at the focus position of the camera, acquiring a first focus value of the camera;
acquiring the object space distance corresponding to the first focus value according to the first focus value and a pre-stored focusing index table; wherein the focusing index table records K groups of correspondences between different object space distances of the camera and second focus values, K being an integer greater than 1;
and taking the object space distance corresponding to the first focusing value as the object space distance of the camera.
16. The measurement method of claim 14, wherein prior to the step of controlling the camera to focus the display module, the method further comprises:
acquiring K groups of correspondences between different object space distances of the camera and second focus values;
and recording the K groups of correspondences between object space distances and second focus values to form the focusing index table.
17. The measurement method according to claim 16, wherein the step of acquiring the K groups of correspondences between different object space distances of the camera and second focus values comprises:
when the focusing graphic card is located at any position in the shooting range of the camera, acquiring the spacing distance between the camera and the focusing graphic card;
keeping the spacing distance between the camera and the focusing graphic card unchanged, and controlling the camera to automatically focus the focusing graphic card;
when the focusing picture card is positioned at the focus position of the camera, acquiring a second focusing numerical value of the camera;
taking the spacing distance between the camera and the focusing graphic card as the object space distance of the camera, and establishing the corresponding relation between the object space distance of the camera and the second focusing numerical value;
and changing the separation distance between the camera and the focusing graphic card, and then returning to the step of acquiring the separation distance between the camera and the focusing graphic card, until K groups of correspondences between different object space distances of the camera and second focus values are obtained.
18. The method of claim 17, wherein the step of changing the separation distance between the camera and the focus card comprises:
fixing the position of the camera, and controlling the focusing graph card to move through a displacement device so as to set the interval distance between the camera and the focusing graph card to be one of K different object space distances of the camera.
19. The measurement method according to any one of claims 15 to 18, wherein the step of acquiring the object distance corresponding to the first focus value according to the first focus value and a pre-stored focus index table comprises:
according to the first focusing value, searching a third focusing value with the minimum absolute difference value with the first focusing value in the focusing index table;
and acquiring the object space distance corresponding to the found third focusing numerical value in the focusing index table as the object space distance corresponding to the first focusing numerical value.
20. The measurement method of claim 14, wherein the camera is a manual-focus camera, and when the display module is at the focal position of the camera, the step of acquiring the object space distance of the camera comprises:
when the display module is positioned at the focal position of the camera, acquiring lens parameters of the camera;
on the premise of keeping the lens parameters of the camera unchanged, controlling the camera to focus the focusing image card;
when the focusing graphic card is positioned at the focus position of the camera, acquiring the spacing distance between the camera and the focusing graphic card;
and taking the spacing distance between the camera and the focusing picture card as the object space distance of the camera.
21. The measurement method according to any one of claims 1 to 4, wherein the step of controlling the camera to focus the display module comprises:
controlling a camera to capture an image picture displayed by the display module;
controlling the virtual reality display equipment to move through a displacement device so that the center of the visual field of the camera is aligned with the center of the display module;
judging whether the captured picture of the camera is clear or not;
and when the picture captured by the camera is clear, determining that the display module is at the focus position of the camera.
22. The measurement method of any one of claims 1 to 4, wherein the optical parameters further comprise: one or more of contrast, dispersion, parallax, angular resolution, brightness uniformity, color uniformity, and color temperature.
23. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the measurement method according to any one of claims 1 to 22 when executing the computer program.
24. A measurement system, comprising: a camera and a terminal device according to claim 23, said terminal device being communicatively connected to said camera.
25. A measuring device, comprising: a processing unit and an imaging unit;
the camera shooting unit is used for focusing a display module of the virtual reality display equipment when the display module is in a lighting state; the display module is used for displaying an image picture to be detected, and the image picture comprises a checkerboard area and a pure-color circular area surrounding the checkerboard area; when the display module is positioned at the focal position of the camera, the image picture displayed by the display module is acquired through the camera unit;
and the processing unit is used for acquiring the optical parameters of the virtual reality display equipment according to the acquired image picture.
CN202010747305.4A 2020-07-29 2020-07-29 Measuring method, system, device and terminal equipment Active CN111947894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010747305.4A CN111947894B (en) 2020-07-29 2020-07-29 Measuring method, system, device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111947894A true CN111947894A (en) 2020-11-17
CN111947894B CN111947894B (en) 2022-10-11

Family

ID=73339790


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017107524A1 (en) * 2015-12-21 2017-06-29 乐视控股(北京)有限公司 Imaging distortion test method and apparatus for virtual reality helmet
CN107333123A (en) * 2016-04-28 2017-11-07 和硕联合科技股份有限公司 Detecting system of focusing and focusing detection method
CN107607295A (en) * 2017-09-30 2018-01-19 华勤通讯技术有限公司 A kind of visual field angle measuring device and method
CN108012147A (en) * 2017-12-22 2018-05-08 歌尔股份有限公司 The virtual image of AR imaging systems is away from test method and device
CN108989794A (en) * 2018-08-01 2018-12-11 上海玮舟微电子科技有限公司 Virtual image information measuring method and system based on head-up-display system
CN109752168A (en) * 2019-01-03 2019-05-14 深圳市亿境虚拟现实技术有限公司 A kind of optical mirror slip detection device for virtual reality device
CN109905700A (en) * 2019-03-08 2019-06-18 歌尔股份有限公司 Virtual display device and its detection method, device, computer readable storage medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465760A (en) * 2020-11-19 2021-03-09 深圳惠牛科技有限公司 Checkerboard corner point identification method, device, equipment and storage medium
CN114593688A (en) * 2022-03-03 2022-06-07 惠州Tcl移动通信有限公司 Three-dimensional measurement method and device based on AR glasses, AR glasses and storage medium
CN114593688B (en) * 2022-03-03 2023-10-03 惠州Tcl移动通信有限公司 Three-dimensional measurement method and device based on AR (augmented reality) glasses, AR glasses and storage medium
CN114674276A (en) * 2022-03-25 2022-06-28 南京汇川图像视觉技术有限公司 Distance measuring method, machine vision system and storage medium
CN114674276B (en) * 2022-03-25 2024-02-23 南京汇川图像视觉技术有限公司 Distance measurement method, machine vision system, and storage medium
CN114993616A (en) * 2022-08-02 2022-09-02 歌尔光学科技有限公司 System, method and device for testing diffraction light waveguide
CN115014724A (en) * 2022-08-10 2022-09-06 歌尔光学科技有限公司 System, method and device for testing diffraction light waveguide
CN115014724B (en) * 2022-08-10 2022-11-22 歌尔光学科技有限公司 System, method and device for testing diffraction light waveguide
CN116337417A (en) * 2023-05-29 2023-06-27 江西联昊光电有限公司 Testing device and testing method for AR and VR optical modules

Also Published As

Publication number Publication date
CN111947894B (en) 2022-10-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant