CN112312127A - Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium


Info

Publication number: CN112312127A (application CN202011192413.6A; granted as CN112312127B)
Authority: CN (China)
Prior art keywords: terminal, imaging detection, video image, video, imaging
Legal status: Granted; Active (the legal status listed is an assumption, not a legal conclusion; no legal analysis has been performed)
Original language: Chinese (zh)
Inventors: 毛艺霖, 朱玲, 陶荣能, 闫莹莹, 池晓安
Assignees (current and original; the listing may be inaccurate): China Mobile Communications Group Co Ltd; China Mobile Hangzhou Information Technology Co Ltd
Application filed by China Mobile Communications Group Co Ltd and China Mobile Hangzhou Information Technology Co Ltd
Publications: CN112312127A (application), CN112312127B (grant)


Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of the invention relate to video imaging technology and disclose an imaging detection method, an imaging detection apparatus, an electronic device, an imaging detection system, and a storage medium. The imaging detection method comprises the following steps: acquiring a behavior track generated for a terminal; controlling a detection device to operate the terminal according to the behavior track; capturing images of the terminal's playing picture while the terminal is operated; and analyzing the images to obtain an imaging detection result. The imaging detection method provided by the embodiments can replace manual detection, improve detection accuracy, unify the detection and analysis standard, and yield objective detection results.

Description

Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium
Technical Field
Embodiments of the invention relate to video imaging technology, and in particular to an imaging detection method, an imaging detection apparatus, an electronic device, an imaging detection system, and a storage medium.
Background
Virtual Reality (VR) technology uses three-dimensional graphics generation, multi-sensor interaction, and high-resolution display technologies to generate a realistic three-dimensional virtual environment, which a user can enter through special interaction equipment.
Existing VR video imaging detection methods comprise quality analysis of the video source and quality analysis of the imaging of the VR terminal. For imaging detection of the VR terminal, existing analysis methods adopt a subjective scoring mechanism: detection personnel interact with the device, watch the video with the naked eye, and subjectively analyze the video imaging quality.
The existing VR video imaging detection method therefore has the following problem: because the subjective scoring mechanism relies on manual detection and scoring by subjective experience, the detection and analysis results vary with the behavior and the subjective standards of the detection personnel.
Disclosure of Invention
The embodiment of the invention aims to provide an imaging detection method which can replace manual detection, improve the detection accuracy, unify the detection analysis standard and obtain an objective detection result.
In order to solve the above technical problem, an embodiment of the present invention provides an imaging detection method, including: acquiring a behavior track generated for the terminal; controlling the detection equipment to operate the terminal according to the behavior track; capturing an image of a terminal playing picture when the terminal is operated; and analyzing the image and obtaining an imaging detection result.
An embodiment of the present invention also provides an imaging detection apparatus, comprising: a trigger module, configured to acquire a behavior track generated for the terminal, the behavior track comprising at least one of a motion track of the terminal and an interactive instruction sent to the terminal, and to control the detection device to operate the terminal according to the behavior track; a capturing module, configured to capture images of the terminal's playing picture while the terminal is operated; and an analysis module, configured to analyze the images and obtain an imaging detection result.
Embodiments of the present invention also provide an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, the instructions enabling the at least one processor to perform the imaging detection method described above.
Embodiments of the present invention also provide an imaging detection system, comprising the electronic device described above and an operating device communicatively connected to the electronic device.
Embodiments of the present invention also provide a storage medium storing a computer program which, when executed by a processor, implements the imaging detection method described above.
Compared with the prior art, the embodiments acquire a behavior track simulating user operation, control the operating device to operate the tested terminal according to the behavior track, capture the video images played during the terminal's operation, and analyze the images to obtain a detection result. Because no detection personnel participate in the detection process, the problems of uncontrollable operation error and inconsistent subjective standards, caused in the prior art by testers manually operating the terminal and scoring by subjective experience, can be solved, and the accuracy, uniformity, and objectivity of imaging detection improved.
In addition, analyzing the video image and obtaining an imaging detection result comprises: identifying a first video image from the video images, the first video image being the video image played by the tested terminal while the operating device executes a track segment representing a preset behavior in the behavior track; and analyzing the first video image to obtain the imaging detection result. In this embodiment, the detection and analysis result is obtained by analyzing only the video images corresponding to track segments of preset behaviors, rather than all captured video images; this reduces the number of images analyzed each time, saves the analysis time otherwise spent on images carrying little information, and improves detection efficiency.

In addition, analyzing the video image and obtaining an imaging detection result comprises: dividing the first video images into multiple groups according to the multiple scenes contained in the video played by the tested terminal, the groups corresponding to the scenes respectively; and analyzing the groups separately to obtain an imaging detection result for each group. In this embodiment, because the first video images are grouped by scene and behavior track, the track segments representing different preset behaviors in different scenes can be analyzed separately, giving a more accurate analysis of the imaging effect and improving the accuracy of imaging detection.

In addition, when a video image played during the operation of the tested terminal is captured, the capture time is marked on the video image, and while the operating device is controlled to operate the tested terminal according to the behavior track, the operation time is marked on the behavior track; identifying the first video image from the video images then comprises taking the video images whose capture time lies between the start operation time and the end operation time of the track segment as the first video images. In this embodiment, by marking capture times on the video images, marking operation times on the behavior track, and comparing the two, the first video images can be identified and extracted quickly, improving imaging detection efficiency.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not drawn to scale unless otherwise specified.
Fig. 1 is a flowchart of an example of an imaging detection method provided according to a first embodiment of the present invention;
Fig. 2 is a flowchart of another example of an imaging detection method provided according to the first embodiment of the present invention;
Fig. 3 is a flowchart of yet another example of an imaging detection method provided according to the first embodiment of the present invention;
Fig. 4 is a flowchart of an example of an imaging detection method provided according to a second embodiment of the present invention;
Fig. 5 is a structural view of an example of an imaging detection apparatus provided according to a third embodiment of the present invention;
Fig. 6 is a structural view of another example of an imaging detection apparatus provided according to the third embodiment of the present invention;
Fig. 7 is a structural view of yet another example of an imaging detection apparatus provided according to the third embodiment of the present invention;
Fig. 8 is a schematic diagram of an imaging detection electronic device provided according to a fourth embodiment of the present invention;
Fig. 9 is a schematic view of an imaging detection system provided according to a fifth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to aid understanding of the present application; the claimed technical solution can nevertheless be implemented without these technical details, and with various changes and modifications based on the following embodiments. The division into embodiments is for convenience of description only and does not limit the specific implementation of the invention; the embodiments may be combined and cross-referenced where not contradictory.
A first embodiment of the present invention relates to an imaging detection method. The specific flow is shown in Fig. 1.
Step 101, acquiring a behavior track simulating user operation;
Step 102, controlling an operation device to operate a tested terminal according to the behavior track;
Step 103, capturing a video image played in the operation process of the tested terminal;
Step 104, analyzing the video image and obtaining an imaging detection result.
Compared with the prior art, this embodiment acquires a behavior track generated for the terminal, controls the detection device to operate the terminal according to the behavior track, captures images of the terminal's playing picture, and analyzes the images. Throughout detection and analysis, a machine replaces the tester: the captured terminal pictures are analyzed to obtain objective detection data, so the detection and analysis standards can be unified and objective detection results obtained.
The imaging detection method of this embodiment is used to detect the imaging quality of VR terminal equipment, for example integrated (all-in-one) head-mounted display devices and external (tethered) head-mounted display devices. Existing imaging quality detection of VR terminal equipment either requires multiple detection devices repeatedly performing complex tests under certain conditions, each covering only part of the indicators, or relies on detection personnel watching the video with the naked eye and subjectively analyzing its imaging quality. In this embodiment, the operating device performs the corresponding operations on the VR terminal device according to received operating instructions, so as to realize the preset behaviors in the preset behavior track. The imaging detection method can be implemented with the imaging detection system to perform imaging quality detection and analysis on VR terminal equipment, and thus to evaluate VR terminal products.
In step 101, the behavior track simulating user operation refers to the track of operations a user would perform on the tested terminal while wearing or holding it. The behavior track may include a motion track along which the tested terminal is driven to move, and interactive instructions sent to the tested terminal. The motion track comprises the track of the terminal's displacement in space. The interactive instructions comprise weak interaction instructions, in which the user interacts with the preset virtual environment (i.e., the playing picture of the tested terminal) through head movement without triggering events on objects in the virtual environment, so that the VR device does not need to perform logic calculation and real-time rendering according to the user's posture information; and strong interaction instructions, for which the VR device does need to perform logic calculation and real-time rendering according to the user's posture information. The behavior track may be generated in advance by the imaging detection system. Weak interaction instructions may include head rotation and the like; strong interaction instructions may include gesture actions, touch actions, eye rotation, and the like. The behavior track can be characterized in terms of time and space, recording the track of the tested terminal as it moves, rotates, or interacts.
For example, the system establishes a spatial rectangular coordinate system over the movable range of the operating device. In the behavior track of the tested terminal, the terminal stays still from the 0th to the 9th second, translates at constant speed from coordinates (0, 0, 0) to (1, 1, 1) from the 9th to the 13th second, rotates clockwise about the x-axis at a constant angular speed of 0.3 rad/s from the 13th to the 16th second, stays still from the 16th to the 19th second, sends a touch action instruction from the 19th to the 20th second, rotates downward by 90 degrees with acceleration from the 20th to the 21st second, and so on.
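The time-and-space characterization described above can be sketched as a small data structure; the field names and segment encoding below are hypothetical (the patent prescribes no representation) and are chosen only to encode the worked example from this paragraph:

```python
from dataclasses import dataclass

@dataclass
class TrackSegment:
    """One segment of a behavior track, characterized by time and space."""
    start_s: float                        # segment start time, seconds
    end_s: float                          # segment end time, seconds
    behavior: str                         # "still", "translate", "rotate", "interact", ...
    start_pos: tuple = (0.0, 0.0, 0.0)    # (x, y, z) in the operating device's coordinate frame
    end_pos: tuple = (0.0, 0.0, 0.0)
    angular_rate: float = 0.0             # rad/s, for rotation segments
    instruction: str = ""                 # interaction instruction, if any

# The example trajectory from the text, encoded segment by segment:
trajectory = [
    TrackSegment(0, 9, "still"),
    TrackSegment(9, 13, "translate", (0, 0, 0), (1, 1, 1)),
    TrackSegment(13, 16, "rotate", angular_rate=0.3),
    TrackSegment(16, 19, "still"),
    TrackSegment(19, 20, "interact", instruction="touch"),
]

def total_duration(track):
    """Total time spanned by a list of contiguous segments."""
    return sum(seg.end_s - seg.start_s for seg in track)

print(total_duration(trajectory))  # 20
```

Encoding each segment with explicit start and end times is what later allows captured frames to be matched to track segments by time alone.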
In step 102, the system controls the operating device to operate the tested terminal according to the behavior track. The operating device may be a robot that wears or holds the VR terminal device; the system controls and drives the robot to perform the corresponding operations on the tested terminal according to the acquired behavior track.
Furthermore, to guard against execution errors of the robot and ensure the accuracy of the detection result, the system can monitor and record the actual behavior track of the operating device and, whenever the actual track differs from the preset track, use the actual track in the detection analysis. This avoids analyzing against a preset behavior track that no longer matches the captured pictures, which would produce a wrong detection and analysis result.
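A minimal sketch of this fallback choice, assuming trajectories are sampled as lists of (x, y, z) points (a representation the patent does not prescribe):

```python
def trajectory_for_analysis(preset, actual, tol=1e-3):
    """Return the trajectory to use in detection analysis: the recorded
    actual trajectory if it deviates from the preset one beyond `tol`,
    otherwise the preset trajectory."""
    if len(preset) != len(actual):
        return actual
    for p, a in zip(preset, actual):
        if any(abs(pc - ac) > tol for pc, ac in zip(p, a)):
            return actual
    return preset

preset = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5), (1.0, 1.0, 1.0)]
actual = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5), (1.0, 1.0, 1.0)]
drift  = [(0.0, 0.0, 0.0), (0.6, 0.5, 0.5), (1.0, 1.0, 1.0)]

print(trajectory_for_analysis(preset, actual) is preset)  # True
print(trajectory_for_analysis(preset, drift) is drift)    # True
```

The tolerance would in practice be set from the robot's positioning accuracy.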
In step 103, the video images played during the operation of the tested terminal are captured, for example with a high-speed camera aimed at the VR terminal. Alternatively, a display device can be externally connected to the VR terminal during detection and configured to display the played video synchronously with the tested terminal; the system then connects to the display device and captures the video images by screen capture. The system may create an image library for the captured video images and store each capture in the library. While the operating device operates the tested terminal according to the behavior track, video images must be captured over the whole course of the track: even for track segments without any interaction or motion, the video images played during those segments must still be captured.
Further, when a captured video image is stored, the time at which it was captured and the spatial position of the terminal at that time are stored with it; that is, each video image carries its capture time and the spatial position information at capture, where the spatial position may be the three-dimensional coordinates of the terminal's location at the moment of capture.
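A minimal sketch of storing a capture together with its time and position metadata; `grab_image` and `get_terminal_position` are hypothetical stand-ins for the screen-capture and position-tracking interfaces:

```python
import time

def capture_frame(grab_image, get_terminal_position, image_library):
    """Capture one frame and store it together with its capture time and
    the terminal's 3-D position at that moment."""
    frame = {
        "image": grab_image(),                # the captured picture
        "capture_time": time.time(),          # time information at capture
        "position": get_terminal_position(),  # (x, y, z) of the terminal
    }
    image_library.append(frame)
    return frame

library = []
capture_frame(lambda: b"\x00" * 16, lambda: (1.0, 1.0, 1.0), library)
print(len(library), library[0]["position"])  # 1 (1.0, 1.0, 1.0)
```

Carrying the metadata on the frame itself is what makes the later time-based and position-based analyses possible without a separate lookup.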
In step 104, a single video image may be analyzed for indicators such as image sharpness, distortion degree, and angle of view, while several consecutive video images may be analyzed together for indicators such as motion-to-photon (MTP) delay, tracking degrees of freedom, and spatial interaction precision. Sharpness, distortion, and similar indicators can be evaluated on a captured image with an image quality assessment (IQA) method; MTP delay, tracking degrees of freedom, spatial interaction precision, and similar indicators can be analyzed by comparing the time information of multiple video images.
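The patent names only a generic IQA method. As one illustrative stand-in for a single-image sharpness indicator (not from the patent), the variance of a Laplacian response is a common no-reference proxy; a pure-Python sketch on flat grayscale arrays:

```python
def sharpness_score(gray, w, h):
    """No-reference sharpness proxy: variance of a 4-neighbour Laplacian
    over a grayscale image given as a flat, row-major list of size w*h.
    (An illustrative stand-in for the IQA method; not from the patent.)"""
    lap = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = gray[y * w + x]
            lap.append(4 * c
                       - gray[y * w + x - 1] - gray[y * w + x + 1]
                       - gray[(y - 1) * w + x] - gray[(y + 1) * w + x])
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

sharp = [0, 0, 255, 255] * 4      # hard vertical edge, 4x4 image
blurred = [0, 85, 170, 255] * 4   # the same edge smoothed into a ramp
print(sharpness_score(sharp, 4, 4), sharpness_score(blurred, 4, 4))  # 65025.0 0.0
```

The sharp edge produces large, alternating Laplacian responses and hence a high variance; the linear ramp cancels exactly and scores zero.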
In one example, as shown in fig. 2, step 101 specifically includes:
step 101-1, generating a behavior track simulating user behavior for the tested terminal.
When acquiring the behavior track, the system can either obtain a behavior track generated in advance or generate a behavior track simulating user behavior for the tested terminal. Because the behavior track simulates the user operation, no detection personnel need to operate the terminal manually.
In an example, before controlling the operation device to operate the terminal under test according to the behavior trace, as shown in fig. 3, the method further includes:
and 101-2, generating a video for the tested terminal to play, and sending the video to the tested terminal.
The video played by the tested terminal may also be generated in advance, or obtained from other channels, for example downloaded from a network. The tested terminal plays the generated video while being operated. The generated videos may be different videos produced from the same VR material with different video processing techniques: different transmission modes, such as full-view transmission or Field-of-View (FOV) adaptive streaming; different video coding and decoding standards, such as H.264 or H.265; or different projection formats, such as equirectangular (equidistant cylindrical) projection or polyhedral projection. Although step 101-2 is drawn between steps 101-1 and 102 in Fig. 3, step 101-2 only needs to occur before step 102.
In one example, different behavior tracks may be generated for different tested terminals, or the same behavior track may be used for different tested terminals. When different tested terminals are detected with the same behavior track, their performance on each detection indicator can be compared, so that the imaging quality differences between terminals can be evaluated more objectively.
In one example, the system may generate an analysis report from the imaging detection result. The report presents the data of each detected indicator, as one-dimensional or two-dimensional data. One-dimensional data may include: the sharpness, full-view resolution, and angle of view of the terminal's video imaging; the motion-to-photon (MTP) delay; the supported video transmission technologies, e.g., whether FOV transmission is supported; the supported VR coding and decoding modes; and the supported strong interaction actions with their interaction degrees of freedom and precision. Two-dimensional data may include: a curve of MTP delay versus angular displacement, a curve of MTP delay versus angular rate of movement, and the like.
The steps of the above method are divided only for clarity of description; in implementation they may be combined into a single step, or a step may be split into several steps, and all such variants fall within the protection scope of this patent as long as the same logical relationship is included. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes, without changing the core design, also falls within the protection scope of this patent.
A second embodiment of the present invention relates to an imaging detection method, as shown in fig. 4, including:
step 201, acquiring a behavior track simulating user operation;
step 202, controlling an operation device to operate the tested terminal according to the behavior track;
step 203, capturing a video image played in the operation process of the tested terminal;
Step 204, identifying a first video image from the video images, the first video image being the video image played by the tested terminal while the operating device executes a track segment representing a preset behavior in the behavior track; and analyzing the first video image to obtain an imaging detection result.
The contents of steps 201, 202, and 203 in this embodiment are substantially the same as those of steps 101, 102, and 103 in the first embodiment and are not described again.
preferably, this embodiment can also be combined with the embodiment shown in fig. 2 or fig. 3, and step 104 in fig. 2 or fig. 3 can be embodied as step 204.
In step 204, the system identifies the first video image among the captured video images, analyzes it, and obtains the imaging detection result. The first video image is the video image played by the tested terminal while the operating device executes a track segment representing a preset behavior in the behavior track. By analyzing only the video images corresponding to track segments of preset behaviors, rather than all captured video images, the system reduces the number of images to be analyzed, saves the analysis time spent on images carrying little information, and improves detection efficiency.
Furthermore, the first video images generated by different track segments, each representing a different preset behavior in the behavior track, can be divided into groups, and each group analyzed separately. Grouping the first video images by behavior allows the imaging analysis to address each behavior, such as head displacement or head rotation, with the corresponding analysis, yielding more accurate results.
In one example, the first video images may correspond to a track segment representing a single preset behavior in the behavior track, or to track segments representing multiple preset behaviors; the multiple preset behaviors may be contiguous in the track or not.
In one example, before capturing the video images played during the operation of the tested terminal, the system further extracts the scene information of the video played by the terminal according to the behavior track, and marks the scene corresponding to each track segment representing a different behavior. When capturing the video images played during the operation of the tested terminal, the system marks each captured image with the scene it belongs to.
In one example, the system may further divide the first video images into groups according to the multiple scenes contained in the video played by the tested terminal, the groups corresponding to the scenes respectively, and analyze each group separately to obtain per-group imaging detection results. For example, the same forward-running action produces different imaging effects on a grassland and in front of a fence: running forward on the grassland, the video image in the user's field of view should change rapidly with the motion, whereas in front of a fence the motion is blocked, so the imaged objects in the field of view cannot change rapidly. The former therefore requires more computation for image rendering than the latter, which may make the MTP delay of forward running in the grassland scene larger than in the fence scene. Analyzing the groups of first video images scene by scene yields a more accurate analysis of the imaging effect and improves the accuracy of imaging detection.
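The per-scene grouping could be sketched as follows, assuming each first video image has already been marked with a scene label at capture time (as described in the surrounding examples; the pair representation is hypothetical):

```python
from collections import defaultdict

def group_by_scene(first_images):
    """Split first video images into per-scene groups for separate analysis.
    `first_images` is a list of (frame_id, scene_label) pairs."""
    groups = defaultdict(list)
    for frame_id, scene in first_images:
        groups[scene].append(frame_id)
    return dict(groups)

first_images = [(0, "grassland"), (1, "grassland"),
                (2, "fence"), (3, "fence"), (4, "grassland")]
groups = group_by_scene(first_images)
print(sorted(groups))       # ['fence', 'grassland']
print(groups["grassland"])  # [0, 1, 4]
```

Each group would then be handed to the per-scene analysis, e.g. an MTP-delay measurement for the grassland frames and another for the fence frames.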
The first video images corresponding to track segments representing different behaviors within the same scene can also be compared, giving an imaging analysis of the same scene under different behaviors and thus a more comprehensive detection of that scene's imaging effect. The track segments representing different behaviors may be different motion tracks of the tested terminal, or different interaction instructions sent to the tested terminal.
In one example, the system also marks the capture time on each video image captured during the operation of the tested terminal, and marks the operation time on the behavior track while controlling the operating device to operate the tested terminal according to the track. Identifying the first video image from the video images then comprises taking the video images whose capture time lies between the start operation time and the end operation time of the track segment as the first video images. By marking capture times on the video images, marking operation times on the behavior track, and comparing the two, the first video images can be identified and extracted quickly, improving imaging detection efficiency.
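The time-window identification of first video images can be sketched directly; the frame dictionaries and field names are hypothetical:

```python
def identify_first_video_images(frames, start_time, end_time):
    """Select captured frames whose marked capture time lies between the
    start and end operation times of a track segment; these are the
    'first video images' used for analysis."""
    return [f for f in frames if start_time <= f["capture_time"] <= end_time]

frames = [{"id": i, "capture_time": t}
          for i, t in enumerate([8.5, 9.2, 10.0, 12.9, 13.4])]
first = identify_first_video_images(frames, 9.0, 13.0)
print([f["id"] for f in first])  # [1, 2, 3]
```

The comparison is a single pass over the image library, which is what makes this identification fast relative to content-based matching.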
In one example, the system may further extract and analyze, from among the video images, the first video images generated by the same track segment in the same scene under different video processing techniques, so that the imaging effect of the terminal on videos processed with different techniques can be detected objectively.
In one example, analyzing the first video image and obtaining an imaging detection result comprises: acquiring the start operation time of the track segment; acquiring the first time after the start operation time at which the terminal completely displays an updated video image; and taking the difference between these two times as the motion-to-photon (MTP) delay in the imaging detection result.
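A sketch of this MTP-delay computation, with times in milliseconds and the image-comparison step (deciding whether a frame fully shows the updated picture) replaced by a precomputed flag, since the patent does not specify that comparison:

```python
def mtp_delay_ms(start_ms, frames):
    """Motion-to-photon delay: time from the start operation time of a track
    segment to the first captured frame at or after it that fully shows the
    updated picture."""
    for f in sorted(frames, key=lambda f: f["t_ms"]):
        if f["t_ms"] >= start_ms and f["updated"]:
            return f["t_ms"] - start_ms
    return None  # no updated frame observed

frames = [{"t_ms": 9000, "updated": False},
          {"t_ms": 9012, "updated": False},
          {"t_ms": 9024, "updated": True},
          {"t_ms": 9036, "updated": True}]
print(mtp_delay_ms(9000, frames))  # 24
```

The resolution of the measured delay is bounded by the capture frame interval, which is why the text earlier suggests a high-speed camera.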
In one example, analyzing the first video image and obtaining an imaging detection result further comprises detecting the tracking degrees of freedom of the terminal. The behavior track of the terminal can include a series of track segments that displace the terminal and send weak interaction instructions; the system acquires the first video images corresponding to these segments and analyzes whether the images captured for each segment change correspondingly, thereby judging the tracking degrees of freedom the terminal supports. For example, existing terminal devices support 3DoF or 6DoF. During detection, the operating device can drive the tested terminal through the three rotations (rotating up and down, tilting left and right, and rotating forward and backward) and the three translations (forward-backward, left-right, and up-down), and the first video images of each track segment are then acquired and analyzed. If the captured images change correspondingly only for the three rotations and do not change for the translations, the tested terminal supports 3DoF; if the captured images change correspondingly for all of the above operations, the tested terminal supports 6DoF.
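The 3DoF/6DoF judgment reduces to checking which probe motions produced a corresponding image change; a sketch, with hypothetical motion labels:

```python
def supported_dof(responses):
    """Infer supported tracking degrees of freedom from whether the captured
    picture changed for each probe motion. `responses` maps the three
    rotations ("pitch", "yaw", "roll") and three translations ("x", "y", "z")
    to a bool: did the first video images of that segment change?"""
    rotations = ("pitch", "yaw", "roll")
    translations = ("x", "y", "z")
    if all(responses[m] for m in rotations + translations):
        return 6
    if all(responses[m] for m in rotations):
        return 3
    return 0

# A terminal whose picture tracks rotations but not translations: 3DoF.
probe = {"pitch": True, "yaw": True, "roll": True,
         "x": False, "y": False, "z": False}
print(supported_dof(probe))  # 3
```

A fuller implementation would derive each boolean by comparing consecutive first video images of the corresponding track segment.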
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into a single step, or a single step may be split into multiple steps. As long as the same logical relationship is included, such divisions are within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant designs, without changing the core design of the algorithm or process, is likewise within the protection scope of this patent.
A third embodiment of the present invention relates to an imaging detection apparatus, as shown in fig. 5, including:
a triggering module 301, configured to obtain a behavior track generated for a terminal, where the behavior track includes at least one of the following: a motion track of the terminal and an interaction instruction sent to the terminal; and further configured to control the operating device to operate the terminal according to the behavior track;
a capturing module 302, configured to capture an image of a terminal playing screen when the terminal is operated;
and the analysis module 303 is configured to analyze the image and obtain an imaging detection result.
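The trigger, capture, and analysis modules 301–303 form a simple pipeline, which can be sketched as follows. The class and method names are illustrative assumptions, not the patented implementation.

```python
class Trigger:
    """Stands in for module 301: obtains the behavior track and drives the device."""
    def get_behavior_track(self, terminal): return ["segment-1", "segment-2"]
    def operate(self, terminal, track): terminal["operated"] = track

class Capturer:
    """Stands in for module 302: captures images of the terminal's playing screen."""
    def capture(self, terminal): return [f"frame-of-{seg}" for seg in terminal["operated"]]

class Analyzer:
    """Stands in for module 303: analyzes captured images into a detection result."""
    def analyze(self, images): return {"frames_analyzed": len(images)}

class ImagingDetector:
    def __init__(self, trigger, capturer, analyzer):
        self.trigger, self.capturer, self.analyzer = trigger, capturer, analyzer

    def run(self, terminal):
        track = self.trigger.get_behavior_track(terminal)
        self.trigger.operate(terminal, track)       # module 301 drives the device
        images = self.capturer.capture(terminal)    # module 302 captures the screen
        return self.analyzer.analyze(images)        # module 303 produces the result

result = ImagingDetector(Trigger(), Capturer(), Analyzer()).run({"name": "HMD"})
print(result)
```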
In one example, as shown in fig. 6, the imaging detection apparatus further includes: the video module 304 is configured to generate a video for the terminal to be tested to play, and send the video to the terminal to be tested; and the tested terminal plays the video in the process of being operated.
In an example, the video module 304 is further configured to extract scene information of the video played by the terminal according to the behavior track, and to transmit the behavior track segments and the corresponding scene information to the analysis module 303.
Furthermore, after generating the video for the terminal under test to play, the video module is further configured to extract all scene information from the played video; the analysis module then performs scene recognition on the captured video images according to the extracted scene information, so that the captured video image information carries the scene information of the scene in which each video image is located.
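The scene tagging described above can be sketched as follows. Representing scene information as named time intervals is an assumption for illustration; in practice the analysis module would recognize scenes from image content.

```python
def tag_frames_with_scenes(frames, scenes):
    """frames: list of (frame_id, capture_time) for captured video images.
    scenes: list of (scene_name, start_time, end_time) extracted from the
    played video. Annotates each frame with the scene it was captured in."""
    tagged = []
    for frame_id, t in frames:
        scene = next((name for name, start, end in scenes if start <= t < end), None)
        tagged.append({"frame": frame_id, "time": t, "scene": scene})
    return tagged

scenes = [("menu", 0.0, 5.0), ("game", 5.0, 30.0)]
frames = [("f1", 2.0), ("f2", 7.5)]
print(tag_frames_with_scenes(frames, scenes))
```

Grouping the tagged frames by scene then yields the per-scene groups of first video images that the analysis module evaluates separately.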
In one example, as shown in fig. 7, the imaging detection apparatus further includes: and the simulation module 305 is used for generating a behavior track simulating the user operation for the terminal.
It should be understood that this embodiment is an example of the apparatus corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module referred to in this embodiment is a logical module. In practical applications, one logical unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
A fourth embodiment of the present invention relates to an electronic device, as shown in fig. 8, including: at least one processor 401; and a memory 402 communicatively coupled to the at least one processor 401. The memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401 to perform the imaging detection method described above.
The memory 402 and the processor 401 are connected by a bus, which may include any number of interconnected buses and bridges linking together various circuits of the one or more processors 401 and the memory 402. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Information processed by the processor 401 is transmitted over a wireless medium through an antenna, and the antenna also receives information and transmits it to the processor 401.
The processor 401 is responsible for managing the bus and general processing and may provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 402 may be used to store information used by the processor in performing operations.
A fifth embodiment of the present invention relates to an imaging detection system, as shown in fig. 9, including: an electronic device 501, and an operating device 502 communicatively connected to the electronic device.
The electronic device 501 is the same as in the fourth embodiment, and the operating device 502 can execute operation instructions issued by the electronic device.
In this embodiment, the operating device 502 is specifically a robot capable of realizing a preset behavior track.
The electronic device 501 may be connected to the operating device for transmission of imaging detection instructions through a wired network such as a LAN, WAN, or VAN, or through a wireless network such as Bluetooth or NFC.
A sixth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the above-described method embodiments.
That is, as those skilled in the art can understand, all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that in practice various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. An imaging detection method, comprising:
acquiring a behavior track simulating user operation;
controlling the operation equipment to operate the tested terminal according to the behavior track;
capturing a video image played in the operation process of the tested terminal;
and analyzing the video image and obtaining an imaging detection result.
2. The imaging detection method according to claim 1, wherein the analyzing the video image and obtaining the imaging detection result comprises:
identifying a first video image from the video images; the first video image is a video image played by the tested terminal in the operation process of the operation equipment according to a track segment representing a preset behavior in the behavior track;
and analyzing the first video image to obtain an imaging detection result.
3. The imaging detection method according to claim 2, wherein the analyzing the first video image and obtaining the imaging detection result comprises:
dividing the first video image into a plurality of groups of first video images according to a plurality of scenes contained in the video played by the tested terminal; wherein the plurality of sets of first video images correspond to the plurality of scenes, respectively;
and analyzing the multiple groups of first video images respectively, and obtaining imaging detection results of the multiple groups of first video images respectively.
4. The imaging detection method according to claim 2 or 3, wherein, when capturing the video image played during the operation of the terminal under test, a capture time is marked for the video image; during the process in which the operating device is controlled to operate the terminal under test according to the behavior track, an operation time is marked for the behavior track; and the identifying a first video image from the video images comprises:
and taking a video image with the capture time between the starting operation time and the ending operation time of the track segment as the first video image.
5. The imaging detection method according to claim 4, wherein the analyzing the first video image and obtaining the imaging detection result comprises:
acquiring the initial operation time of the terminal in the track segment;
and acquiring the capture time of the first completely displayed first video image captured after the initial operation time, calculating the time difference between the capture time and the initial operation time, and taking the time difference as the motion-to-photon (MTP) latency in the imaging detection result.
6. The imaging detection method according to claim 1, wherein before the controlling operation device operates the terminal under test according to the behavior trace, the method further comprises:
generating a video for the tested terminal to play, and sending the video to the tested terminal; and the tested terminal plays the video in the process of being operated.
7. An imaging detection apparatus, comprising:
the trigger module is used for acquiring a behavior track generated for the terminal, the behavior track comprising at least one of the following: a motion track of the terminal and an interaction instruction sent to the terminal; and is further used for controlling the detection equipment to operate the terminal according to the behavior track;
the capturing module is used for capturing an image of a terminal playing picture when the terminal is operated;
and the analysis module is used for analyzing the image and obtaining an imaging detection result.
8. An electronic device, comprising:
at least one processor;
a memory communicatively coupled to the at least one processor;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the imaging detection method of any of claims 1 to 6.
9. An imaging inspection system, comprising: the electronic device of claim 8, and an operating device communicatively coupled to the electronic device.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the imaging detection method of any one of claims 1 to 6.
CN202011192413.6A 2020-10-30 2020-10-30 Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium Active CN112312127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011192413.6A CN112312127B (en) 2020-10-30 2020-10-30 Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium


Publications (2)

Publication Number Publication Date
CN112312127A true CN112312127A (en) 2021-02-02
CN112312127B CN112312127B (en) 2023-07-21

Family

ID=74332873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011192413.6A Active CN112312127B (en) 2020-10-30 2020-10-30 Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium

Country Status (1)

Country Link
CN (1) CN112312127B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100225743A1 (en) * 2009-03-05 2010-09-09 Microsoft Corporation Three-Dimensional (3D) Imaging Based on MotionParallax
WO2010143870A2 (en) * 2009-06-08 2010-12-16 주식회사 케이티 Method and apparatus for monitoring video quality
CN106249918A (en) * 2016-08-18 2016-12-21 南京几墨网络科技有限公司 Virtual reality image display packing, device and apply its terminal unit
CN106441810A (en) * 2016-12-16 2017-02-22 捷开通讯(深圳)有限公司 Device and method for detecting delay time of VR (virtual reality) equipment
US20170164026A1 (en) * 2015-12-04 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and device for detecting video data
US20170287097A1 (en) * 2016-03-29 2017-10-05 Ati Technologies Ulc Hybrid client-server rendering in a virtual reality system
CN107820075A (en) * 2017-11-27 2018-03-20 中国计量大学 A kind of VR equipment delayed test devices based on light stream camera
CN107979754A (en) * 2016-10-25 2018-05-01 百度在线网络技术(北京)有限公司 A kind of test method and device based on camera application
CN108307190A (en) * 2017-01-13 2018-07-20 欧普菲有限公司 Method, apparatus for testing display and computer program product
CN208285458U (en) * 2018-06-08 2018-12-25 深圳惠牛科技有限公司 The detection device of display module image quality
CN109147059A (en) * 2018-09-06 2019-01-04 联想(北京)有限公司 A kind of determination method and apparatus for the numerical value that is delayed
KR20190057761A (en) * 2017-11-20 2019-05-29 경기대학교 산학협력단 Apparatus and method for performance analysis of virtual reality head mounted display motion to photon latency
US20190199998A1 (en) * 2016-10-26 2019-06-27 Tencent Technology (Shenzhen) Company Limited Video file processing method and apparatus
CN110036636A (en) * 2016-12-14 2019-07-19 高通股份有限公司 Viewport perceived quality metric for 360 degree of videos
CN110180185A (en) * 2019-05-20 2019-08-30 联想(上海)信息技术有限公司 A kind of time delay measurement method, apparatus, system and storage medium
CN110460831A (en) * 2019-08-22 2019-11-15 京东方科技集团股份有限公司 Display methods, device, equipment and computer readable storage medium
CN111093069A (en) * 2018-10-23 2020-05-01 大唐移动通信设备有限公司 Quality evaluation method and device for panoramic video stream
CN111131735A (en) * 2019-12-31 2020-05-08 歌尔股份有限公司 Video recording method, video playing method, video recording device, video playing device and computer storage medium


Also Published As

Publication number Publication date
CN112312127B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
US11605308B2 (en) Weld training systems to synchronize weld data for presentation
JP2019071048A (en) System and method for deep learning based hand gesture recognition from first person view point
JP6723738B2 (en) Information processing apparatus, information processing method, and program
JP7042561B2 (en) Information processing equipment, information processing method
US20200242280A1 (en) System and methods of visualizing an environment
US20110109628A1 (en) Method for producing an effect on virtual objects
US20120114183A1 (en) Operation analysis device and operation analysis method
US20210019900A1 (en) Recording medium, object detection apparatus, object detection method, and object detection system
CN108228124B (en) VR vision test method, system and equipment
CN105373011B (en) Detect the real-time emulation system and computer of electro-optical tracking device
CN106679933B (en) Performance test method and system for head display equipment
EP3913478A1 (en) Systems and methods for facilitating shared rendering
US20160189422A1 (en) Process and Device for Determining the 3D Coordinates of an Object
KR101242089B1 (en) Interactive stage system apatrtus and simulation method of the same
JP5664215B2 (en) Augmented reality display system, augmented reality display method used in the system, and augmented reality display program
KR101932525B1 (en) Sensing device for calculating information on position of moving object and sensing method using the same
CN112312127B (en) Imaging detection method, imaging detection device, electronic equipment, imaging detection system and storage medium
KR101850134B1 (en) Method and apparatus for generating 3d motion model
CN112828895B (en) Robot simulation system
TW201937461A (en) Interactive training and testing apparatus
CN108195563B (en) Display effect evaluation method and device of three-dimensional display device and evaluation terminal
Liu et al. A scalable automated system to measure user experience on smart devices
JP4763661B2 (en) Image quality evaluation method, program, and apparatus
EP3882853A1 (en) Image processing method and apparatus
CN117082235A (en) Evaluation system, evaluation method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant