CN112312127B - Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium


Info

Publication number
CN112312127B
Authority
CN
China
Prior art keywords
imaging detection
terminal
video image
video
behavior
Prior art date
Legal status
Active
Application number
CN202011192413.6A
Other languages
Chinese (zh)
Other versions
CN112312127A (en)
Inventor
毛艺霖
朱玲
陶荣能
闫莹莹
池晓安
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Hangzhou Information Technology Co Ltd
Priority to CN202011192413.6A
Publication of CN112312127A
Application granted
Publication of CN112312127B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present invention relate to video imaging technology and disclose an imaging detection method, an imaging detection device, electronic equipment, an imaging detection system, and a storage medium. The imaging detection method comprises the following steps: acquiring a behavior track generated for a terminal; controlling an operating device to operate the terminal according to the behavior track; capturing images of the terminal's playback picture while the terminal is operated; and analyzing the images to obtain an imaging detection result. The imaging detection method provided by the embodiments can replace manual detection, improve detection accuracy, unify the detection and analysis standard, and yield objective detection results.

Description

Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium
Technical Field
The embodiments of the present invention relate to video imaging technology, and in particular to an imaging detection method, an imaging detection device, electronic equipment, an imaging detection system, and a storage medium.
Background
Virtual Reality (VR) technology generates a realistic three-dimensional virtual environment using three-dimensional graphics generation, multi-sensor interaction, and high-resolution display technologies; the user enters the virtual environment through dedicated interaction devices.
Existing VR video imaging detection methods divide into quality analysis of the video source and quality analysis of the imaging on the VR terminal. For imaging detection on the VR terminal, the existing analysis method adopts a subjective scoring mechanism: a tester interacts with the device, watches the video with the naked eye, and subjectively analyzes the video imaging quality.
The existing VR video imaging detection method therefore has the following problem: under the subjective scoring mechanism, detection is performed manually by a tester who scores from subjective experience, so the detection and analysis results vary with the tester's behavior during detection and with differing subjective standards.
Disclosure of Invention
The invention aims to provide an imaging detection method that can replace manual detection, improve detection accuracy, unify detection and analysis standards, and yield objective detection results.
To solve the above technical problem, an embodiment of the present invention provides an imaging detection method comprising the following steps: acquiring a behavior track generated for a terminal; controlling an operating device to operate the terminal according to the behavior track; capturing images of the terminal's playback picture while the terminal is operated; and analyzing the images to obtain an imaging detection result.
An embodiment of the present invention further provides an imaging detection device, comprising: a triggering module configured to acquire a behavior track generated for the terminal and to control an operating device to operate the terminal according to the behavior track, wherein the behavior track comprises at least one of the following: a motion track of the terminal, and an interaction instruction sent to the terminal; a capturing module configured to capture images of the terminal's playback picture while the terminal is operated; and an analysis module configured to analyze the images and obtain an imaging detection result.
An embodiment of the present invention further provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the imaging detection method described above.
An embodiment of the present invention further provides an imaging detection system, comprising: the above electronic device, and an operating device communicatively connected to the electronic device.
An embodiment of the present invention further provides a storage medium storing a computer program which, when executed by a processor, implements the imaging detection method described above.
Compared with the prior art, the embodiments of the present invention acquire a behavior track that simulates user operation, control the operating device to operate the terminal under test according to that track, capture the video images played while the terminal is operated, and analyze those images to obtain a detection result. Because the detection process requires no tester, it avoids the uncontrollable operating errors and inconsistent subjective standards that arise in the prior art when a tester manually operates the terminal and scores from subjective experience, thereby improving the accuracy, uniformity, and objectivity of imaging detection.
In addition, analyzing the video images and obtaining an imaging detection result includes: identifying first video images among the captured video images, the first video images being the video images played by the terminal under test while the operating device executes a track segment of the behavior track that represents a preset behavior; and analyzing the first video images to obtain the imaging detection result. In this embodiment, the video imaging detection and analysis result is obtained by analyzing only the video images of the track segments representing preset behaviors, rather than all captured video images. This reduces the number of images to analyze in each pass, saves the analysis time otherwise spent on low-information images, and improves detection efficiency.
In addition, analyzing the first video images and obtaining an imaging detection result includes: dividing the first video images into multiple groups according to the multiple scenes contained in the video played by the terminal under test, the groups corresponding to the scenes respectively; and analyzing each group separately to obtain its imaging detection result. In this embodiment, grouping the images by scene and behavior track allows track segments representing different preset behaviors in different scenes to be analyzed separately, which yields more accurate analysis of the imaging effect and improves the accuracy of imaging detection.
In addition, when the video images played while the terminal under test is operated are captured, each video image is tagged with its capture moment, and while the operating device is controlled to operate the terminal under test according to the behavior track, the behavior track is tagged with operation moments. Identifying the first video images among the video images then includes: taking as first video images those whose capture moment lies between the start-of-operation moment and the end-of-operation moment of the track segment. In this embodiment, tagging the video images with capture moments and the behavior track with operation moments allows the first video images to be obtained from the relation between a segment's start and end operation moments and the capture moments; this method of marking and comparing time information enables fast identification and extraction of the first video images and thus improves imaging detection efficiency.
Drawings
One or more embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements; the figures are not to be taken as limiting unless otherwise indicated.
Fig. 1 is a flowchart of one example of an imaging detection method provided according to a first embodiment of the present invention;
fig. 2 is a flowchart of another example of an imaging detection method provided according to the first embodiment of the present invention;
fig. 3 is a flowchart of still another example of the imaging detection method provided according to the first embodiment of the present invention;
fig. 4 is a flowchart of an example of an imaging detection method provided according to a second embodiment of the present invention;
fig. 5 is a block diagram of an example of an imaging detection apparatus provided according to a third embodiment of the present invention;
fig. 6 is a block diagram of another example of an imaging detection apparatus provided according to a third embodiment of the present invention;
fig. 7 is a block diagram of still another example of an imaging detection apparatus provided according to a third embodiment of the present invention;
FIG. 8 is a schematic diagram of imaging detection electronics provided in accordance with a fourth embodiment of the present invention;
fig. 9 is a schematic diagram of an imaging detection system according to a fifth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand that numerous technical details are set forth in the various embodiments to give the reader a better understanding of the present application; the technical solutions claimed in the present application can nevertheless be implemented without certain of these details, or with various changes and modifications based on the following embodiments. The division into embodiments is for convenience of description only and should not be construed as limiting specific implementations of the present invention; the embodiments may be combined with and refer to one another where not contradictory.
A first embodiment of the present invention relates to an imaging detection method. The specific flow is shown in Fig. 1.
Step 101: obtain a behavior track simulating user operation;
Step 102: control an operating device to operate the terminal under test according to the behavior track;
Step 103: capture the video images played while the terminal under test is operated;
Step 104: analyze the video images and obtain an imaging detection result.
Compared with the prior art, in this embodiment the behavior track generated for the terminal is acquired, the operating device is controlled to operate the terminal according to it, and the images of the terminal's playback picture are captured and analyzed. Because the detection process only requires generating a behavior track and operating the terminal according to it, different indicators need not be detected separately on multiple pieces of equipment, which reduces the equipment and steps required and simplifies detection. In the analysis process, a machine replaces the tester: the captured terminal images are analyzed to obtain objective detection data, which unifies the detection and analysis standard and yields objective detection results.
The imaging detection method in this embodiment is used to detect the imaging quality of VR terminal devices in VR technology, for example all-in-one head-mounted displays and tethered (external) head-mounted displays. Existing imaging quality detection of VR terminal devices either uses multiple detection devices to repeatedly run complex tests under particular conditions, each detecting part of the indicators, or relies on a tester watching the video with the naked eye and subjectively analyzing the imaging quality. The operating device can perform corresponding operations on the VR terminal device according to received operation instructions — for example a robot that executes such instructions — so as to realize the preset behaviors in the preset behavior track. The imaging detection method can be realized with the imaging detection system to detect and analyze the imaging quality of a VR terminal device and thereby evaluate VR terminal products.
In step 101, the behavior track simulating user operation is a track that simulates how a user operates the terminal under test while wearing or holding it. The behavior track may include: a motion track driving the terminal under test to move, interaction instructions sent to the terminal under test, and so on. The motion track comprises the trajectory of the terminal's displacement in space. The interaction instructions include weak interaction instructions, by which the user interacts with the preset virtual environment (i.e., the playback picture of the terminal under test) through head movement, without triggering events on objects in the virtual environment and without requiring the VR device to perform logic computation and real-time rendering from the user's posture information; and strong interaction instructions, by which trigger events on objects in the virtual environment are fired through an interaction device to interact with those objects in real time, requiring the VR device to perform logic computation and real-time rendering from the user's posture information. The behavior track may be pre-generated by the imaging detection system. Weak interaction instructions may include head rotation and the like; strong interaction instructions may include gesture motions, touch motions, eye rotation, and the like. The behavior track can be described in time and space, recording the track of the terminal under test as it moves, rotates, or interacts. For example, the system establishes a spatial rectangular coordinate system over the movable range of the operating device; in the behavior track of the terminal under test, the terminal stays still from second 0 to second 9, translates at constant speed from the position with coordinates (0, 0, 0) to the position with coordinates (1, 1, 1) from second 9 to second 13, rotates clockwise about the x-axis at an angular velocity of 0.3 rad/s from second 13 to second 16, stays still from second 16 to second 19, sends a touch-action instruction from second 19 to second 20, rotates downward at constant speed through 90 degrees from second 20 to second 21, and so on.
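For illustration only, such a behavior track can be encoded as a timed sequence of segments. The following Python sketch (all names are hypothetical, not part of the patent) records the example track above; an implementation of the imaging detection system could drive the operating device from a structure like this:

```python
from dataclasses import dataclass, field

@dataclass
class TrackSegment:
    """One time slice of a behavior track simulating user operation."""
    t_start: float  # start-of-operation moment, seconds
    t_end: float    # end-of-operation moment, seconds
    action: str     # "hold", "translate", "rotate", or "instruction"
    params: dict = field(default_factory=dict)

# The example track from the description, encoded segment by segment.
behavior_track = [
    TrackSegment(0, 9, "hold"),
    TrackSegment(9, 13, "translate", {"from": (0, 0, 0), "to": (1, 1, 1)}),
    TrackSegment(13, 16, "rotate", {"axis": "x", "rad_per_s": 0.3, "sense": "clockwise"}),
    TrackSegment(16, 19, "hold"),
    TrackSegment(19, 20, "instruction", {"command": "touch"}),
    TrackSegment(20, 21, "rotate", {"axis": "pitch", "angle_deg": -90}),
]
```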
In step 102, the system controls the operating device to operate the terminal under test according to the behavior track. The operating device may be a robot that wears or holds the VR terminal device; the system controls and drives the robot to perform the corresponding operations on the terminal under test according to the acquired behavior track.
Furthermore, to avoid errors in the robot's execution and ensure the accuracy of the detection result, the system can monitor and record the actual behavior track of the operating device, and when the actual behavior track differs from the preset one, use the actual track in the detection analysis. This avoids the erroneous analysis results that would arise if, when the actual and preset tracks differ, the preset track were used for analysis even though the behavior is inconsistent with the picture.
In step 103, the video images played while the terminal under test is operated are captured; a high-speed camera may be used to capture the video images played in the VR terminal. Alternatively, during detection the VR terminal can be connected to an external display device configured to display the played video images synchronously with the terminal under test; the system connects to the display device and captures the video images from it by screen capture. The system may create an image library for the captured video images and store them there. While the operating device operates the terminal under test according to the behavior track, video images must be captured over the entire course of the track: even for track segments without any interaction or motion, the video images played during those segments are captured.
Further, when a captured video image is stored, the time at which it was captured and the spatial position of the terminal at that moment are recorded with it; that is, the video image carries the time information and spatial position information of the capture moment. The spatial position information may be the three-dimensional coordinates of the terminal's position at the capture moment.
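As a minimal sketch of the tagging just described (the record layout is an assumption, not prescribed by the patent), each stored frame can carry its capture moment and the terminal's three-dimensional coordinates at that moment:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CapturedFrame:
    """A stored video image plus the time and spatial tags described above."""
    image_path: str      # location of the frame in the image library
    capture_time: float  # capture moment, seconds on the system clock
    position: Tuple[float, float, float]  # terminal's 3-D coordinates at capture
    scene: str = ""      # scene label, filled in when scene marking is used
```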
In step 104, some analyses of the video images use a single video image, such as analysis of indicators like image sharpness, distortion, and field of view; analyses of indicators like motion-to-photon (MTP) delay, tracking degrees of freedom, and spatial interaction precision use multiple consecutive video images. The captured images can be analyzed with image quality assessment (IQA) methods for indicators such as sharpness and distortion, while indicators such as MTP delay, tracking degrees of freedom, and spatial interaction precision can be obtained by comparing multiple images against their time information.
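The patent does not mandate a particular IQA metric; as one common no-reference example of a single-image analysis, the variance of the Laplacian can serve as a sharpness proxy (this sketch assumes OpenCV is available):

```python
import cv2  # OpenCV

def sharpness_score(image_path: str) -> float:
    """No-reference sharpness proxy: variance of the Laplacian.
    Higher values indicate crisper edges in the captured frame."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```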
In one example, as shown in Fig. 2, step 101 is specifically:
Step 101-1: generate a behavior track simulating user behavior for the terminal under test.
When acquiring the behavior track, the system can obtain a pre-generated behavior track, or generate a behavior track simulating user behavior for the terminal under test. Simulating user operation through the behavior track replaces a tester's manual operation of the terminal.
In one example, before the operating device is controlled to operate the terminal under test according to the behavior track, as shown in Fig. 3, the method further includes:
Step 101-2: generate a video for the terminal under test to play, and send the video to the terminal under test.
The video played by the terminal under test can also be pre-generated or obtained through other channels, such as downloaded from a network. The terminal under test plays the generated video while being operated. The generated videos may be different videos produced from the same VR material with different video processing technologies, which may use different transmission modes, such as full-view transmission or Field-of-View Adaptive Streaming (FoVAS); different video coding standards, such as H.264 or H.265; or different projection formats, such as equirectangular projection or polyhedral projection. Although step 101-2 is shown in Fig. 3 between steps 101-1 and 102, it only needs to be performed before step 102.
In one example, different behavior tracks may be generated for different terminals under test, or the same behavior track may be used for different terminals under test. When different terminals under test are detected with the same behavior track, their performance on the various detection indicators can be compared, so that the imaging quality differences between them can be evaluated objectively.
In one example, the system may generate an analysis report from the imaging detection results. The report contains a data presentation of each detected indicator, which may include one-dimensional or two-dimensional presentation data. One-dimensional presentation data may include: sharpness of the terminal's video imaging, full-view resolution, field of view, motion-to-photon (MTP) delay, supported video transmission technologies (e.g., whether FoV transmission is supported), supported VR codecs, supported strong interaction actions, interaction degrees of freedom and precision, and the like. Two-dimensional presentation data may include: a curve of MTP delay versus angular displacement, a curve of MTP delay versus angular velocity of movement, and the like.
The steps of the above methods are divided only for clarity of description; when implemented they may be combined into one step or split into multiple steps, and as long as the same logical relationship is included, they fall within the protection scope of this patent. Adding insignificant modifications to the algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow, also falls within the protection scope of this patent.
A second embodiment of the present invention relates to an imaging detection method, as shown in Fig. 4, including:
Step 201: obtain a behavior track simulating user operation;
Step 202: control an operating device to operate the terminal under test according to the behavior track;
Step 203: capture the video images played while the terminal under test is operated;
Step 204: identify first video images among the video images, the first video images being the video images played by the terminal under test while the operating device executes a track segment of the behavior track that represents a preset behavior, and analyze the first video images to obtain an imaging detection result.
The contents of steps 201, 202 and 203 in this embodiment are substantially the same as those of steps 101, 102 and 103 in the first embodiment, and are not repeated here.
Preferably, this embodiment may also be combined with the embodiment shown in Fig. 2 or Fig. 3, and step 104 in Fig. 2 or Fig. 3 may be implemented in the manner of step 204.
In step 204, the system identifies the first video images among the video images, analyzes them, and obtains an imaging detection result. The first video images are the video images played by the terminal under test while the operating device executes a track segment of the behavior track that represents a preset behavior. By analyzing only the video images of such track segments rather than all captured video images, the system reduces the number of images to analyze, saves the analysis time otherwise spent on low-information images, and improves detection efficiency.
Further, the first video images generated by different track segments representing different preset behaviors can be divided into multiple groups, and each group, corresponding to its track segment, analyzed separately; that is, the first video images are grouped and analyzed by behavior, so that the imaging analysis can be performed per behavior — head displacement, head rotation, and so on — yielding more accurate analysis results.
In one example, the first video images may correspond to a track segment containing only one preset behavior, or to track segments containing several preset behaviors; those several behaviors may be contiguous in the track or not.
In one example, before capturing the video images played while the terminal under test is operated, the system also extracts scene information from the video played by the terminal according to the behavior track, and marks the scenes corresponding to the track segments representing different behaviors. When capturing a video image played while the terminal under test is operated, the system marks the scene in which the captured video image is located.
In one example, the system may further divide the first video images into multiple groups according to the multiple scenes contained in the video played by the terminal under test, the groups corresponding to the scenes respectively, and analyze each group separately to obtain its imaging detection result. For example, the same forward-running action produces a different imaging effect on grass than in front of a fence: when running forward on grass, the video image in the user's field of view should change rapidly with the action, whereas in front of a fence the view is blocked, so the imaged objects cannot change rapidly with the action. The former therefore requires more computation to present the image than the latter, which may make the MTP delay of forward running larger in the grass scene than in the fence scene. Analyzing the groups of first video images separately per scene yields a more accurate analysis of the imaging effect and improves the accuracy of imaging detection.
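A minimal sketch of the per-scene grouping, reusing the hypothetical CapturedFrame type sketched earlier (the grouping key is the scene label attached at capture time):

```python
from collections import defaultdict
from typing import Dict, List

def group_by_scene(first_frames: List["CapturedFrame"]) -> Dict[str, List["CapturedFrame"]]:
    """Split the first video images into per-scene groups so each group
    (grass, fence, ...) can be analyzed against its own scene."""
    groups: Dict[str, List["CapturedFrame"]] = defaultdict(list)
    for frame in first_frames:
        groups[frame.scene].append(frame)
    return dict(groups)
```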
The first video images corresponding to track segments representing different behaviors in the same scene can also be compared and analyzed, giving an imaging analysis of the same scene under different behaviors and thus a more comprehensive detection of that scene's imaging effect. Track segments representing different behaviors may be different motion tracks of the terminal under test, or different interaction instructions sent to it.
In one example, when capturing the video images played while the terminal under test is operated, the system also tags each video image with its capture moment, and while controlling the operating device to operate the terminal under test according to the behavior track, tags the behavior track with operation moments. Identifying the first video images among the video images then includes: taking as first video images those whose capture moment lies between the start-of-operation moment and the end-of-operation moment of the track segment. Tagging the video images with capture moments and the behavior track with operation moments allows the first video images to be obtained from the relation between a segment's start and end operation moments and the capture moments; this method of marking and comparing time information enables fast identification and extraction of the first video images and improves imaging detection efficiency.
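Under the same hypothetical types as above, identifying the first video images of a track segment reduces to a comparison of the tagged moments:

```python
from typing import List

def first_video_images(frames: List["CapturedFrame"],
                       segment: "TrackSegment") -> List["CapturedFrame"]:
    """First video images of one track segment: every captured frame whose
    capture moment lies between the segment's start-of-operation and
    end-of-operation moments."""
    return [f for f in frames
            if segment.t_start <= f.capture_time <= segment.t_end]
```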
In one example, the system may further extract and analyze separately the first video images generated by the same track segment in the same scene under different video processing technologies, so that objective detection results are obtained for the terminal's imaging effect after the video has been processed by each technology.
In one example, analyzing the first video images and obtaining an imaging detection result includes: acquiring the initial operation moment of the terminal in the track segment; acquiring the moment at which the terminal first fully displays a video image after the initial operation moment; and calculating the time difference between the two moments, which is taken as the motion-to-photon (MTP) delay in the imaging detection result.
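A sketch of that MTP computation under the hypothetical types above; the fully_displayed predicate stands in for whatever image comparison an implementation uses to decide that a frame is completely displayed:

```python
from typing import Callable, List

def mtp_delay(frames: List["CapturedFrame"], segment: "TrackSegment",
              fully_displayed: Callable[["CapturedFrame"], bool]) -> float:
    """MTP delay for one track segment: the gap between the segment's initial
    operation moment and the capture moment of the first frame that is fully
    displayed after that moment."""
    candidates = sorted((f for f in frames if f.capture_time >= segment.t_start),
                        key=lambda f: f.capture_time)
    for frame in candidates:
        if fully_displayed(frame):
            return frame.capture_time - segment.t_start
    raise ValueError("no fully displayed frame after the initial operation moment")
```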
In one example, analyzing the first video images and obtaining an imaging detection result further includes: detecting the tracking degrees of freedom of the terminal. The behavior track of the terminal can include a series of track segments that displace the terminal and send weak interaction instructions; the system acquires the first video images corresponding to this series of segments and analyzes whether the images captured for each segment show the corresponding change, thereby judging the tracking degrees of freedom (DoF) the terminal supports. For example, existing terminal devices support either 3DoF or 6DoF. During detection, the operating device may control the terminal to rotate up and down (pitch), turn left and right (yaw), tilt left and right (roll), and translate forward and backward, left and right, and up and down, after which the first video images of these track segments are acquired and analyzed. If the captured images change correspondingly only when the terminal pitches, yaws, and rolls, and do not change when it translates, the terminal under test supports 3DoF; if the images change correspondingly for all of these operations, it supports 6DoF.
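The decision rule just described can be sketched as follows (the motion names and the change_detected map are assumptions for illustration; the per-motion change detection itself would come from comparing the corresponding first video images):

```python
from typing import Dict

ROTATIONS = ("pitch", "yaw", "roll")       # rotate up/down, turn left/right, tilt
TRANSLATIONS = ("surge", "sway", "heave")  # move fwd/back, left/right, up/down

def supported_dof(change_detected: Dict[str, bool]) -> int:
    """Infer the tracking degrees of freedom from which probed motions
    produced a corresponding change in the captured first video images."""
    if all(change_detected.get(m, False) for m in ROTATIONS + TRANSLATIONS):
        return 6  # rotations and translations both tracked
    if all(change_detected.get(m, False) for m in ROTATIONS):
        return 3  # rotational tracking only
    return 0      # fewer tracked axes than the 3DoF/6DoF cases discussed
```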
The steps of the above methods are divided only for clarity of description; when implemented they may be combined into one step or split into multiple steps, and as long as the same logical relationship is included, they fall within the protection scope of this patent. Adding insignificant modifications to the algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow, also falls within the protection scope of this patent.
A third embodiment of the present invention relates to an imaging detection apparatus, as shown in Fig. 5, including:
a triggering module 301 configured to acquire a behavior track generated for the terminal and to control the operating device to operate the terminal according to the behavior track, wherein the behavior track comprises at least one of the following: a motion track of the terminal, and an interaction instruction sent to the terminal;
a capturing module 302 configured to capture images of the terminal's playback picture while the terminal is operated; and
an analysis module 303 configured to analyze the images and obtain an imaging detection result.
In one example, as shown in Fig. 6, the imaging detection apparatus further includes: a video module 304 configured to generate a video for the terminal under test to play and to send it to the terminal under test; the terminal under test plays the video while being operated.
In one example, the video module 304 is further configured to extract scene information from the video played by the terminal according to the behavior track, and to transmit the behavior track segments and the corresponding scene information to the analysis module 303.
Further, the video module is also configured to extract all scene information from the played video after the video for the terminal under test has been generated; the analysis module then performs scene recognition on the captured video images according to the extracted scene information, so that each captured video image carries the scene information of that image.
In one example, as shown in Fig. 7, the imaging detection apparatus further includes: a simulation module 305 configured to generate, for the terminal, a behavior track simulating user operation.
It should be noted that this embodiment is the apparatus counterpart of the first embodiment and can be implemented in cooperation with it. The related technical details mentioned in the first embodiment remain valid in this embodiment and are not repeated here; correspondingly, the related technical details of this embodiment can also be applied to the first embodiment.
It should also be noted that each module in this embodiment is a logical module; in practical applications, a logical unit may be one physical unit, part of one physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present invention, units less closely related to solving the technical problem posed by the invention are not introduced in this embodiment, which does not mean that no other units exist.
A fourth embodiment of the present invention relates to an electronic device, as shown in Fig. 8, including: at least one processor 401; and a memory 402 communicatively coupled to the at least one processor; wherein the memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401 to perform the imaging detection method described above.
The memory 402 and the processor 401 are connected by a bus, which may comprise any number of interconnected buses and bridges connecting together the various circuits of the one or more processors 401 and the memory 402. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or a plurality of elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatus over a transmission medium. Data processed by the processor 401 is transmitted over a wireless medium via an antenna, which also receives data and passes it to the processor 401.
The processor 401 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfacing, voltage regulation, power management, and other control functions, while the memory 402 may be used to store data used by the processor 401 in performing operations.
A fifth embodiment of the present invention relates to an imaging detection system, as shown in Fig. 9, including: an electronic device 501, and an operating device 502 communicatively connected to the electronic device.
The electronic device 501 is the electronic device of the fourth embodiment, and the operating device 502 can execute operation instructions issued by the electronic device.
In this embodiment, the operating device 502 is a robot capable of realizing a preset behavior track.
The electronic device 501 may connect to the operating device through a wired network such as a LAN, WAN, or VAN, or through a wireless network such as Bluetooth or NFC, to transmit imaging detection instructions.
A sixth embodiment of the present invention relates to a computer-readable storage medium storing a computer program that, when executed by a processor, implements the above method embodiments.
That is, those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention and that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (8)

1. An imaging detection method, comprising:
acquiring a behavior track simulating user operation;
controlling an operating device to operate a terminal under test according to the behavior track;
capturing video images played while the terminal under test is operated;
analyzing the video images and obtaining an imaging detection result;
wherein the analyzing the video images and obtaining an imaging detection result comprises:
identifying first video images among the video images; the first video images are the video images played by the terminal under test while the operating device executes a track segment, representing a preset behavior, of the behavior track;
analyzing the first video images and obtaining the imaging detection result;
wherein the analyzing the first video images and obtaining the imaging detection result comprises:
dividing the first video images into multiple groups according to multiple scenes contained in the video played by the terminal under test, wherein the multiple groups of first video images correspond to the multiple scenes respectively;
analyzing the multiple groups of first video images respectively, and obtaining imaging detection results of the multiple groups of first video images respectively;
wherein the terminal under test is a VR terminal.
2. The imaging detection method according to claim 1, wherein each video image is tagged with a capture moment when the video images played while the terminal under test is operated are captured; the behavior track is tagged with operation moments while the operating device is controlled to operate the terminal under test according to the behavior track; and the identifying first video images among the video images comprises:
taking, as the first video images, the video images whose capture moment lies between a start-of-operation moment and an end-of-operation moment of the track segment.
3. The imaging detection method according to claim 2, wherein the analyzing the first video images and obtaining the imaging detection result comprises:
acquiring an initial operation moment of the terminal in the track segment;
acquiring the capture moment of the first video image that the terminal first fully displays after the initial operation moment, calculating the time difference between that capture moment and the initial operation moment, and taking the time difference as a motion-to-photon (MTP) delay in the imaging detection result.
4. The imaging detection method according to claim 1, further comprising, before the operating device is controlled to operate the terminal under test according to the behavior track:
generating a video for the terminal under test to play, and sending the video to the terminal under test; wherein the terminal under test plays the video while being operated.
5. An imaging detection apparatus, comprising:
a triggering module configured to acquire a behavior track generated for a terminal and to control an operating device to operate the terminal under test according to the behavior track, wherein the behavior track comprises at least one of the following: a motion track of the terminal, and an interaction instruction sent to the terminal;
a capturing module configured to capture video images played while the terminal under test is operated;
an analysis module configured to analyze the video images and obtain an imaging detection result;
wherein the analyzing the video images and obtaining an imaging detection result comprises:
identifying first video images among the video images; the first video images are the video images played by the terminal under test while the operating device executes a track segment, representing a preset behavior, of the behavior track;
analyzing the first video images and obtaining the imaging detection result;
wherein the analyzing the first video images and obtaining the imaging detection result comprises:
dividing the first video images into multiple groups according to multiple scenes contained in the video played by the terminal under test, wherein the multiple groups of first video images correspond to the multiple scenes respectively;
analyzing the multiple groups of first video images respectively, and obtaining imaging detection results of the multiple groups of first video images respectively;
wherein the terminal under test is a VR terminal.
6. An electronic device, comprising:
at least one processor;
a memory communicatively coupled to the at least one processor;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the imaging detection method as claimed in any one of claims 1 to 4.
7. An imaging detection system, comprising: the electronic device of claim 6, and an operating device communicatively coupled to the electronic device.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the imaging detection method according to any one of claims 1 to 4.
CN202011192413.6A 2020-10-30 2020-10-30 Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium Active CN112312127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011192413.6A CN112312127B (en) 2020-10-30 2020-10-30 Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium

Publications (2)

Publication Number Publication Date
CN112312127A CN112312127A (en) 2021-02-02
CN112312127B true CN112312127B (en) 2023-07-21

Family

Family ID: 74332873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011192413.6A Active CN112312127B (en) 2020-10-30 2020-10-30 Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium

Country Status (1)

Country Link
CN (1) CN112312127B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010143870A2 (en) * 2009-06-08 2010-12-16 KT Corp. Method and apparatus for monitoring video quality
CN106249918A (en) * 2016-08-18 2016-12-21 南京几墨网络科技有限公司 Virtual reality image display packing, device and apply its terminal unit
CN107979754A (en) * 2016-10-25 2018-05-01 百度在线网络技术(北京)有限公司 A kind of test method and device based on camera application
CN208285458U (en) * 2018-06-08 2018-12-25 深圳惠牛科技有限公司 The detection device of display module image quality
CN110036636A (en) * 2016-12-14 2019-07-19 高通股份有限公司 Viewport perceived quality metric for 360 degree of videos
CN110460831A (en) * 2019-08-22 2019-11-15 京东方科技集团股份有限公司 Display methods, device, equipment and computer readable storage medium
CN111093069A (en) * 2018-10-23 2020-05-01 大唐移动通信设备有限公司 Quality evaluation method and device for panoramic video stream
CN111131735A (en) * 2019-12-31 2020-05-08 歌尔股份有限公司 Video recording method, video playing method, video recording device, video playing device and computer storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8199186B2 (en) * 2009-03-05 2012-06-12 Microsoft Corporation Three-dimensional (3D) imaging based on motion parallax
US20170164026A1 (en) * 2015-12-04 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and device for detecting video data
US20170287097A1 (en) * 2016-03-29 2017-10-05 Ati Technologies Ulc Hybrid client-server rendering in a virtual reality system
CN107995482B (en) * 2016-10-26 2021-05-14 腾讯科技(深圳)有限公司 Video file processing method and device
CN106441810B (en) * 2016-12-16 2019-07-26 捷开通讯(深圳)有限公司 The detection device and detection method of the delay time of VR equipment
US9807384B1 (en) * 2017-01-13 2017-10-31 Optofidelity Oy Method, apparatus and computer program product for testing a display
KR20190057761A (en) * 2017-11-20 2019-05-29 경기대학교 산학협력단 Apparatus and method for performance analysis of virtual reality head mounted display motion to photon latency
CN107820075A (en) * 2017-11-27 2018-03-20 中国计量大学 A kind of VR equipment delayed test devices based on light stream camera
CN109147059B (en) * 2018-09-06 2020-09-25 联想(北京)有限公司 Method and equipment for determining delay value
CN110180185B (en) * 2019-05-20 2023-08-18 联想(上海)信息技术有限公司 Time delay measurement method, device, system and storage medium


Also Published As

Publication number Publication date
CN112312127A (en) 2021-02-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant