WO2021217467A1 - Test method and device for a smart camera (一种智能摄像头的测试方法及装置) - Google Patents


Info

Publication number
WO2021217467A1
Authority
WO
WIPO (PCT)
Prior art keywords
test
image
frame
code stream
test result
Application number
PCT/CN2020/087633
Other languages
English (en)
French (fr)
Inventor
肖华熙
陈建
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN202080100243.6A (published as CN115516431A)
Priority to PCT/CN2020/087633
Publication of WO2021217467A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; error correction; monitoring
    • G06F 11/36 - Preventing errors by testing or debugging software
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 - Diagnosis, testing or measuring for television systems or their details

Definitions

  • This application relates to the field of computer technology, and in particular to a test method and device for a smart camera.
  • In this application, an AI camera (AIC) is a smart camera that runs an artificial intelligence (AI) recognition algorithm.
  • In the prior art, the AIC obtains a test code stream or test video, runs the AI algorithm to recognize it frame by frame, superimposes the AI recognition result on each test video frame, and sends the frames one by one to a test computer. A tester views the test results on the test computer and compares them, frame by frame, with the original test code stream or test video to reach a conclusion about the AI algorithm.
  • Testers can view the test result code stream frame by frame on the test computer, but they cannot play it in synchronization with the test code stream for a comparison test, nor pause, fast-forward, drag, or seek by time. The test operation is therefore complicated, insufficiently intuitive and flexible, and test efficiency is low.
  • the present application provides a test method and device for a smart camera, which solves the problems in the prior art that the test operation of the smart camera is complicated, not intuitive and flexible enough, and the test efficiency is low.
  • In a first aspect, a method for testing a smart camera is provided, applied to a first device.
  • The method includes: the first device sends a test code stream to a second device, the test code stream including multiple frames of test images; the first device receives a test result code stream sent by the second device, the test result code stream including multiple frames of test result images, where each test result image is obtained by superimposing recognition result information on the corresponding test image, and the recognition result information is obtained by the second device performing artificial intelligence (AI) recognition on the test image; and the first device controls the test code stream and the test result code stream to be displayed together on a display.
  • In this way, the first device obtains multiple frames of test images and sends them to the second device for AI computation. After receiving the test result images returned by the second device, the first device controls the test code stream and the test result code stream to be displayed together on the display, so that a tester can compare the two streams and readily find problematic images in the test result code stream without manual frame-by-frame control. This improves the intuitiveness and efficiency of smart camera testing and simplifies the test process.
  • In a possible implementation, the first device sends the test code stream to the second device by obtaining the multiple frames of test images frame by frame from a first player or a decoder, where the first player is the player that plays the test code stream, and sending the test images to the second device frame by frame.
  • In this way, the first device acquires the test images frame by frame through the player or decoder and sends them to the second device, so that it can receive the corresponding test result images from the second device in real time.
  • In a possible implementation, when the first player plays the test code stream, the first device obtains a target test image from the first player, the target test image being the test image currently played by the first player, and sends the target test image to the second device.
  • In this way, the first device can send the currently played test image to the second device in real time; the second device performs AI recognition on it and returns the corresponding test result image to the first device for synchronized playback.
  • Because the test delay is usually small, the test image and the test result image can be played approximately synchronously. When playback of the two streams on the first device is strictly synchronized, the tester can compare each test image with its corresponding test result image at the same moment, which significantly improves the intuitiveness and flexibility of the smart camera comparison test and improves test efficiency.
  • In a possible implementation, the first device obtains the multiple frames of test images frame by frame from the first player or the decoder by: obtaining multiple frames of original test images of the test code stream, and performing at least one of image compression, format conversion, or resolution adjustment on the original test images frame by frame to obtain the test images.
  • In this way, the first device can compress the acquired original test images so that the resulting test images occupy less storage space, reducing the delay in sending them to the second device. The first device may also adjust the resolution of the original test images so that the processed test images are compatible with the AI recognition algorithm of the second device, or convert the image format, for example to a format that the second device's AI recognition algorithm can process, to improve the efficiency of the synchronized comparison test.
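Purely as an illustration (the application contains no code), the resolution adjustment and compression step could be sketched as follows; the function name, nearest-neighbour resampling, and use of zlib are all assumptions, not the patent's method:

```python
import zlib

def preprocess_frame(frame_bytes: bytes, width: int, height: int,
                     target_width: int, target_height: int) -> bytes:
    """Downsample a grayscale frame by nearest-neighbour sampling
    (resolution adjustment), then compress it (image compression) so
    each frame occupies less bandwidth on the link to the second device."""
    x_step = width / target_width
    y_step = height / target_height
    resized = bytearray()
    for ty in range(target_height):
        row_start = int(ty * y_step) * width
        row = frame_bytes[row_start:row_start + width]
        for tx in range(target_width):
            resized.append(row[int(tx * x_step)])
    return zlib.compress(bytes(resized))
```

A real implementation would more likely re-encode to a codec the AIC accepts (e.g. JPEG), but the shape of the pipeline, resize then compress then send, is the same.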
  • In a possible implementation, the first device controls the two code streams to be displayed together by controlling the first player to play the test code stream and, after receiving the test result code stream sent by the second device, controlling a second player to play the test result code stream.
  • In this way, while the first player plays the test code stream, the second player plays the test result code stream received in real time, so the two streams can be played at the same time in the same display interface. This makes it convenient for testers to compare the two streams and improves the efficiency of the synchronized comparison test.
  • In a possible implementation, controlling the two code streams to be displayed together specifically includes: the first device controls each test image and its corresponding test result image to be displayed synchronously.
  • In this way, synchronous display of each test image with its corresponding test result image allows the tester to perform the comparison frame by frame, or to seek to a specified test image and compare it with its test result image, which improves the flexibility of the synchronized test and further improves the test efficiency of the smart camera.
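One simple way to realize this pairing, offered only as a sketch (the class and its frame-index keying are assumptions, not the patent's design), is to hold each frame until its counterpart arrives:

```python
class SyncDisplayController:
    """Pair each test frame with its result frame by frame index so the
    two players show corresponding frames together."""

    def __init__(self):
        self._tests = {}     # frames from the first player awaiting a result
        self._results = {}   # result frames awaiting their test frame
        self.displayed = []  # (index, test_frame, result_frame) pairs shown

    def on_test_frame(self, idx, frame):
        self._tests[idx] = frame
        self._try_display(idx)

    def on_result_frame(self, idx, frame):
        self._results[idx] = frame
        self._try_display(idx)

    def _try_display(self, idx):
        # Display only when both halves of the pair are available.
        if idx in self._tests and idx in self._results:
            self.displayed.append(
                (idx, self._tests.pop(idx), self._results.pop(idx)))
```

In this sketch `displayed` stands in for handing the pair to the two players; a real controller would also bound the buffers and handle dropped frames.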
  • In a possible implementation, the synchronous display is achieved by adjusting the playback rate at which the first player plays the test code stream, so that the second player plays the test result image corresponding to the test image currently played by the first player.
  • In this way, by adjusting the time interval between adjacent frames of the test images played by the first player, each test image can be displayed synchronously with its corresponding test result image, which makes it convenient for the tester to compare them and improves the efficiency of the synchronized comparison test.
  • Specifically, if the playback rate of the first player is such that the interval between adjacent frames is less than the test delay, the playback rate is reduced so that the interval between adjacent frames becomes greater than the test delay, where the test delay is the time difference between the first device sending a test image to the second device and receiving the test result image returned by the second device.
  • In this way, the test code stream and the test result code stream are played synchronously on the first device; that is, each test image and its corresponding test result image are displayed simultaneously, so the tester can compare them at the same moment, which significantly improves the intuitiveness and flexibility of the smart camera comparison test and improves test efficiency.
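The rate adjustment above amounts to a small piece of arithmetic. As a sketch (the function name and the 10% safety margin are assumptions), the first player's inter-frame interval could be chosen like this:

```python
def adjusted_frame_interval(nominal_fps: float, test_delay_s: float) -> float:
    """Return the inter-frame interval the first player should use so that
    each test image's result frame arrives back from the second device
    before the next test image is shown.

    test_delay_s: measured send-to-receive delay for one frame.
    """
    nominal_interval = 1.0 / nominal_fps
    if nominal_interval > test_delay_s:
        # Nominal playback is already slower than the round trip; keep it.
        return nominal_interval
    # Otherwise stretch the interval past the delay (10% margin assumed).
    return test_delay_s * 1.1
```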
  • In a possible implementation, the test result code stream includes mixed images, where a mixed image is a spliced image of a test image and its corresponding test result image.
  • In this way, the first device can display each test image and its corresponding test result image simultaneously according to the spliced image, achieving the effect of a synchronized test and improving test efficiency.
  • In a possible implementation, the first device displays the mixed images of the test result code stream directly on the display; or the first device performs image segmentation on the mixed images frame by frame to recover each test image and its corresponding test result image, and then controls the first player to play the test images while controlling the second player to play the test result images.
  • In this way, the first device may play the received mixed images through a single player, displaying each test image and its test result image at the same time; alternatively, it may use two players to play the test images and the test result images separately. Both approaches achieve the effect of a synchronized test and improve test efficiency.
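For the two-player case, the segmentation step is a straightforward crop. As an illustrative sketch (assuming a left/right spliced frame represented as rows of pixel values; not the patent's concrete format):

```python
def split_mixed_frame(mixed):
    """Split a left/right spliced mixed frame back into the test image
    (left half) and the test result image (right half)."""
    half = len(mixed[0]) // 2
    test_img = [row[:half] for row in mixed]
    result_img = [row[half:] for row in mixed]
    return test_img, result_img
```

A top/bottom splice would be split the same way along the row axis instead.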
  • In a possible implementation, the method further includes: the first device sends hypertext transfer protocol (HTTP) request information to the second device, where the HTTP request information includes a uniform resource locator (URL) corresponding to a control operation, and the control operation is used to instruct the second device to control the test image to be processed.
  • In this way, the first device can send the URL corresponding to a specific control operation to the second device, instructing it to process the current test image according to that operation.
  • The control operation may include at least one of pausing, starting, jumping to a designated test image, or adjusting the frame rate.
  • In this way, the first device sends specific control operations for playing the test images to the second device, for example start, pause, adjust the playback frame rate, fast-forward, rewind, or jump to a specified frame, so that the second device processes the test images accordingly. The display of the test images and test result images can thus be controlled according to the test requirements, improving the flexibility of the test operation and the test efficiency.
  • In a possible implementation, the HTTP request information further includes input parameters, the input parameters including the frame sequence number or timestamp of the test image to be processed, used to indicate the test image to which the control operation applies.
  • In this way, the request information may also carry the frame sequence number or timestamp of the test image corresponding to the control operation, which the second device uses to locate the test image the operation applies to. The second device can then return the test result image corresponding to the test image specified by the first device, improving the flexibility of the test operation and the test efficiency.
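The mapping from control operation to URL plus parameters could look like the following sketch; the endpoint paths and parameter names here are hypothetical, since the application only states that each control operation corresponds to its own URL and may carry a frame number or timestamp:

```python
from urllib.parse import urlencode

# Hypothetical endpoint paths, one per control operation.
CONTROL_PATHS = {
    "start": "/control/start",
    "pause": "/control/pause",
    "seek": "/control/seek",            # jump to a designated test image
    "frame_rate": "/control/frame-rate",
}

def build_control_url(base, operation, **params):
    """Build the request URL for a control operation, optionally carrying
    the frame sequence number or timestamp as input parameters."""
    url = base + CONTROL_PATHS[operation]
    query = urlencode(params)
    return url + "?" + query if query else url
```

The first device would issue an HTTP request to the resulting URL; the second device dispatches on the path and reads the parameters to locate the frame.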
  • In a possible implementation, controlling the two code streams to be displayed together specifically includes: the first device receives server-sent event (SSE) information from the second device, the SSE information including the position information or time information of the test image currently being processed by the second device; and the first device controls the currently displayed test image and test result image according to the position information or time information.
  • Here, the position information refers to the position of the test image on the playback progress bar of the test code stream, and the time information may refer to the timestamp corresponding to the test image.
  • In this way, the first device can control the currently displayed test image and test result image through the position or time information reported by the second device, improving the flexibility of the test operation and the test efficiency.
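On the receiving side, the first device only needs to pull the payload out of each event. A minimal sketch, assuming a JSON payload with hypothetical field names ("frame", "ts_ms"); the patent requires only that the SSE carry position or time information:

```python
import json

def parse_progress_event(raw_event: str):
    """Extract the position/time payload from one server-sent event.
    An SSE event is newline-delimited 'field: value' lines; the payload
    travels on the 'data:' line."""
    for line in raw_event.splitlines():
        if line.startswith("data:"):
            return json.loads(line[len("data:"):].strip())
    return None  # event carried no data (e.g. a keep-alive)
```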
  • In a possible implementation, the test code stream comes from the storage system of the first device, a network file system (NFS) of the first device, or the storage system of a third device.
  • In this way, the test code stream obtained by the first device may come from the first device itself, an NFS, or another device, so the source of the test code stream can be configured flexibly.
  • In a second aspect, a method for testing a smart camera includes: a second device receives, frame by frame, test images sent by a first device, where each test image is the currently processed image that the first device obtained from a first player or a decoder; the second device performs artificial intelligence (AI) recognition on the test images frame by frame to obtain the test result image corresponding to each test image, the test result image including the test image and the recognition result information obtained by AI recognition of the test image; and the second device sends a test result code stream to the first device, the test result code stream including multiple frames of test result images.
  • In this way, the second device acquires in real time the test image currently processed by the player or decoder of the first device, performs AI recognition on it frame by frame, and sends the corresponding test result image back, so that the first device can control the test code stream and the test result code stream to be displayed together for testing. This improves the intuitiveness of the tester's comparison between test images and test result images and improves test efficiency.
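The "superimposing the recognition result information on the test image" step can be pictured as drawing the detection onto the frame. A toy sketch (a rectangle outline on a 2D pixel grid; the real AIC would render boxes and labels via its graphics pipeline):

```python
def overlay_result(frame, box, value=255):
    """Return a copy of the frame with the recognition result drawn as a
    rectangle outline, a stand-in for superimposing recognition result
    information on the test image. box = (x0, y0, x1, y1), inclusive."""
    x0, y0, x1, y1 = box
    out = [row[:] for row in frame]   # do not mutate the original frame
    for x in range(x0, x1 + 1):
        out[y0][x] = value            # top edge
        out[y1][x] = value            # bottom edge
    for y in range(y0, y1 + 1):
        out[y][x0] = value            # left edge
        out[y][x1] = value            # right edge
    return out
```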
  • In a possible implementation, the method further includes: the second device acquires multiple frames of original test images of the test code stream; after the second device performs at least one of decompression, format conversion, or resolution adjustment on the original test images frame by frame, the multiple frames of test images of the test code stream are obtained.
  • In this way, test images in a format that the AI recognition algorithm of the second device can process are obtained; meanwhile, because the test images received by the second device were compressed by the first device, the delay in receiving multiple frames of test images is reduced, improving the efficiency of the synchronized comparison test.
  • In a third aspect, a method for testing a smart camera includes: a second device acquires a test code stream including multiple frames of test images; the second device performs AI recognition on the test images frame by frame to obtain the test result image corresponding to each test image, the test result image being obtained by superimposing the recognition result information on the test image; the second device splices each test image with its corresponding test result image frame by frame to obtain the mixed image corresponding to the test image; and the second device sends a test result code stream including multiple frames of mixed images to the first device.
  • In this way, what the second device sends may be spliced images of each test image and its corresponding test result image, so that the first device can display the test image and its test result image simultaneously according to the spliced image, achieving the effect of a synchronized test and improving test efficiency.
  • In a possible implementation, the second device splices each test image with its corresponding test result image top-to-bottom or left-to-right to obtain the mixed image corresponding to the test image.
  • In this way, the second device splices the test image and its corresponding test result image into one image, top-to-bottom or left-to-right, and after it is sent to the first device, the first device can display the two synchronously, improving the testers' efficiency.
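The splicing itself is simple concatenation along one image axis. A sketch over row-of-pixels frames (representation assumed; a production AIC would concatenate pixel buffers or use its encoder's tiling support):

```python
def splice_frames(test_img, result_img, mode="left_right"):
    """Splice a test image and its result image into one mixed frame,
    either side by side or stacked vertically."""
    if mode == "left_right":
        return [t + r for t, r in zip(test_img, result_img)]
    if mode == "top_bottom":
        return test_img + result_img
    raise ValueError(f"unknown splice mode: {mode}")
```

This is the inverse of the segmentation the first device performs when it chooses to play the two halves through separate players.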
  • In a possible implementation, the method further includes: the second device decodes and/or decapsulates the test code stream to obtain the multiple frames of test images.
  • In a possible implementation, the method further includes: the second device receives HTTP request information sent by the first device, the HTTP request information including the URL corresponding to a control operation; and the second device controls the test image to be processed based on the control operation.
  • In this way, the second device can obtain the corresponding control operation from the URL in the request information and process the current test image accordingly, which improves the flexibility with which the first device controls the test operation and improves test efficiency.
  • The control operation may include at least one of pausing, starting, jumping to a designated test image, or adjusting the frame rate.
  • In this way, the second device determines the specific control operation from the HTTP request information sent by the first device, for example start, pause, adjust the playback frame rate, or jump to a specified frame, and processes the test image to be processed accordingly, so that the display of the test images and test result images can be controlled according to the test requirements, improving the flexibility of the test operation and the test efficiency.
  • In a possible implementation, the HTTP request information further includes input parameters, the input parameters including the frame sequence number or timestamp of the test image to be processed, used to indicate the test image to which the control operation applies.
  • In this way, the request information may also carry the frame sequence number or timestamp of the test image corresponding to the control operation, which the second device uses to locate the test image the operation applies to. The second device can then return the test result image corresponding to the test image specified by the first device, improving the flexibility of the test operation and the test efficiency.
  • In a possible implementation, the method further includes: the second device sends server-sent event (SSE) information to the first device, the SSE information including the position information or time information of the test image currently processed by the second device, the position information or time information being used to instruct the first device to control the display of the test image and the test result image.
  • Here, the position information refers to the position of the test image on the playback progress bar of the test code stream, and the time information may refer to the timestamp corresponding to the test image.
  • In this way, the second device reports the position or time information of the test image to the first device, achieving control over the test image and test result image currently displayed on the first device and improving the flexibility and efficiency of the test operation.
  • In a possible implementation, the test code stream comes from the storage system of the first device, a network file system (NFS) of the second device, or the storage system of a third device.
  • In this way, the test code stream obtained by the second device may come from the first device, an NFS, or another device, so the source of the test code stream can be configured flexibly.
  • In a fourth aspect, a test device for a smart camera includes: a sending module, configured to send a test code stream including multiple frames of test images to a second device; a receiving module, configured to receive a test result code stream sent by the second device, the test result code stream including multiple frames of test result images, where each test result image is obtained by superimposing recognition result information on the corresponding test image and the recognition result information is obtained by the second device performing artificial intelligence (AI) recognition on the test image; and a control module, configured to control the test code stream and the test result code stream to be displayed together on a display.
  • In a possible implementation, the test device further includes an acquisition module, configured to acquire the multiple frames of test images of the test code stream frame by frame from a first player or a decoder, the first player being the player that plays the test code stream; the sending module is specifically configured to send the test images to the second device frame by frame.
  • In a possible implementation, the acquisition module is specifically configured to acquire a target test image from the first player, the target test image being the test image currently played by the first player; the sending module is specifically configured to send the target test image to the second device.
  • In a possible implementation, the acquisition module is specifically configured to: acquire multiple frames of original test images of the test code stream from the first player or the decoder; and perform at least one of image compression, format conversion, or resolution adjustment on the original test images frame by frame to obtain the multiple frames of test images of the test code stream.
  • In a possible implementation, the control module is specifically configured to: control the first player to play the test code stream; and, after the test result code stream sent by the second device is received, control the second player to play the test result code stream.
  • In a possible implementation, the control module is specifically configured to control each test image and its corresponding test result image to be displayed synchronously.
  • In a possible implementation, the control module is specifically configured to control, by adjusting the playback rate at which the first player plays the test code stream, the second player to play the test result image corresponding to the test image played by the first player.
  • In a possible implementation, the test result code stream includes mixed images, where a mixed image is a spliced image of a test image and its corresponding test result image.
  • In a possible implementation, the control module is specifically configured to: display the mixed images of the test result code stream on the display; or perform image segmentation on the mixed images frame by frame to obtain each test image and its corresponding test result image, and control the second player to play the test result images while controlling the first player to play the test images.
  • In a possible implementation, the sending module is further configured to send HTTP request information to the second device, the HTTP request information including the URL corresponding to a control operation, the control operation being used to instruct the second device to control the test image to be processed.
  • In a possible implementation, the HTTP request information further includes input parameters, the input parameters including the frame sequence number or timestamp of the test image to be processed, used to indicate the test image to which the control operation applies.
  • In a possible implementation, the receiving module is further configured to receive server-sent event (SSE) information from the second device, the SSE information including the position information or time information of the test image currently processed by the second device; the control module is further configured to control the currently displayed test image and test result image according to the position information or time information.
  • In a possible implementation, the test code stream comes from the storage system of the test device, a network file system (NFS) of the test device, or the storage system of a third device.
  • In a fifth aspect, a test device for a smart camera includes: a receiving module, configured to receive, frame by frame, test images sent by a first device, where each test image is the currently processed image that the first device obtained from a first player or a decoder; an AI recognition module, configured to perform AI recognition on the test images frame by frame to obtain the test result image corresponding to each test image, the test result image including the test image and the recognition result information obtained by AI recognition of the test image; and a sending module, configured to send a test result code stream including multiple frames of test result images to the first device.
  • In a possible implementation, the test device further includes a processing module, configured to perform at least one of decompression, format conversion, or resolution adjustment on the multiple frames of original test images acquired by the receiving module, frame by frame, to obtain the multiple frames of test images of the test code stream.
  • In a sixth aspect, a test device for a smart camera includes: an acquisition module, configured to acquire a test code stream including multiple frames of test images; an AI recognition module, configured to perform AI recognition on the test images frame by frame to obtain the test result image corresponding to each test image, the test result image including the test image and the recognition result information obtained by AI recognition of the test image; a processing module, configured to splice each test image with its corresponding test result image frame by frame to obtain the mixed image corresponding to the test image; and a sending module, configured to send a test result code stream including multiple frames of mixed images to the first device.
  • In a possible implementation, the test device further includes a decoding module, configured to decode and/or decapsulate the test code stream to obtain the multiple frames of test images.
  • In a possible implementation, the test device further includes a receiving module, configured to receive HTTP request information sent by the first device, the HTTP request information including the URL corresponding to a control operation; the AI recognition module is further configured to control the test image to be processed based on the control operation.
  • In a possible implementation, the HTTP request information further includes input parameters, the input parameters including the frame sequence number or timestamp of the test image to be processed, used to indicate the test image to which the control operation applies.
  • In a possible implementation, the sending module is further configured to send server-sent event (SSE) information to the first device, the SSE information including the position information or time information of the test image currently processed by the second device, used to instruct the first device to control the display of the test image and the test result image.
  • In a possible implementation, the test code stream comes from the storage system of the first device, a network file system (NFS) of the second device, or the storage system of a third device.
  • a test device for a smart camera includes a processor and a transmission interface, and the processor is configured to execute instructions stored in a memory to execute:
  • the test code stream is sent to the second device through the transmission interface, and the test code stream includes multiple frames of test images; the test result code stream sent from the second device is received through the transmission interface, and the test result code stream includes multiple frames of test result images,
  • the test result image is obtained by superimposing the recognition result information on the test image corresponding to the test result image, and the recognition result information is obtained by the second device performing artificial intelligence AI recognition on the test image; the processor controls the test code stream and the test result code stream to be displayed together on the display.
  • the processor is specifically configured to: obtain multiple frames of test images of the test code stream from a first player or a decoder frame by frame, where the first player is a player that plays the test code stream; and send the multiple frames of test images to the second device frame by frame.
  • the processor is specifically configured to: in the case that the first player plays the test code stream, obtain a target test image from the first player, where the target test image is the test image currently played by the first player; and send the target test image to the second device.
  • the processor is specifically configured to: obtain multiple frames of original test images of the test code stream, and perform at least one of image compression, format conversion, or resolution adjustment on the original test images frame by frame, so as to obtain the multiple frames of test images of the test code stream.
  • the processor is specifically configured to: control the first player to play the test code stream; and after receiving the test result code stream sent from the second device, control the second player to play the test result code stream.
  • the processor is specifically configured to execute: control the test image and the test result image corresponding to the test image to be displayed synchronously.
  • the processor is specifically configured to: control, by adjusting the playback rate at which the first player plays the test code stream, the second player to play the test result image corresponding to the test image played by the first player.
  • the test result code stream includes a mixed image, and the mixed image is a stitched image of the test image and the test result image corresponding to the test image.
  • the processor is specifically configured to: display the mixed image in the test result code stream on a display; or perform image segmentation on the mixed image frame by frame to obtain each frame of test image and the test result image corresponding to each frame of test image, and control the second player to play the test result image while controlling the first player to play the test image.
  • the processor is further configured to: send hypertext transfer protocol HTTP request information to the second device through the transmission interface, where the HTTP request information includes a uniform resource locator URL corresponding to a control operation, and the control operation is used to instruct the second device to control the test image to be processed.
  • the HTTP request information further includes an input parameter; the input parameter includes the frame serial number or the time stamp of the test image to be processed, and is used to indicate the test image to be processed corresponding to the control operation.
  • the processor is further configured to: receive, through the transmission interface, server-sent event SSE information sent from the second device, where the SSE information includes the location information or time information of the test image currently processed by the second device; and control the currently displayed test image and test result image according to the location information or the time information.
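  • The control channel described above (an HTTP request carrying a control-operation URL plus a frame-number input parameter, and SSE position updates flowing back) can be sketched as follows. This is a minimal illustration only: the URL path `/control/`, the query field names `frame`/`ts`, and the JSON fields in the SSE payload are hypothetical, since the patent does not specify a concrete API.

```python
from urllib.parse import urlencode
import json

def build_control_url(base, operation, frame_no=None, timestamp=None):
    """Build an HTTP request URL for a control operation (e.g. pause/play).

    The frame serial number or time stamp of the test image to be processed
    is carried as an input parameter in the query string.
    """
    params = {}
    if frame_no is not None:
        params["frame"] = frame_no
    if timestamp is not None:
        params["ts"] = timestamp
    query = ("?" + urlencode(params)) if params else ""
    return f"{base}/control/{operation}{query}"

def parse_sse_event(line):
    """Parse one server-sent event 'data:' line into the position info
    (frame number / time) of the test image currently processed by the AIC."""
    assert line.startswith("data:")
    return json.loads(line[len("data:"):].strip())

url = build_control_url("http://aic.local", "pause", frame_no=120)
event = parse_sse_event('data: {"frame": 120, "time_ms": 4000}')
```

The first device would use such a URL to instruct the second device, and would use the parsed SSE position to align the currently displayed test image and test result image.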
  • the test code stream comes from the storage system of the test device, or the network file system NFS of the test device, or the storage system of the third device.
  • a test device for a smart camera includes a processor and a transmission interface, and the processor is configured to execute instructions stored in a memory to perform the test method provided in the second aspect or any possible design of the second aspect.
  • a test device for a smart camera includes a processor and a transmission interface, and the processor is configured to execute instructions stored in a memory to perform the test method provided in the third aspect or any possible design of the third aspect.
  • a computer-readable storage medium is provided, in which instructions are stored; when the instructions are executed by a computer or a processor, the computer or the processor performs the test method of the smart camera according to any one of the first aspect.
  • an eleventh aspect provides a computer program product; when the computer program product runs on a computer or a processor, the computer or the processor performs the test method of the smart camera according to any one of the first aspect.
  • a computer-readable storage medium is provided, in which instructions are stored; when the instructions are executed by a computer or a processor, the computer or the processor performs the test method of the smart camera according to any one of the second aspect.
  • a computer program product is provided; when the computer program product runs on a computer or a processor, the computer or the processor performs the test method of the smart camera according to any one of the second aspect.
  • a computer-readable storage medium is provided, in which instructions are stored; when the instructions are executed by a computer or a processor, the computer or the processor performs the test method of the smart camera according to any one of the third aspect.
  • a computer program product is provided; when the computer program product runs on a computer or a processor, the computer or the processor performs the test method of the smart camera according to any one of the third aspect.
  • any of the smart camera test devices, computer-readable storage media, and computer program products provided above can be used to perform the corresponding methods provided above; therefore, for the beneficial effects that they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
  • FIG. 1 is a schematic diagram of an application scenario of a method for testing a smart camera provided by an embodiment of this application;
  • FIG. 2 is a schematic flowchart of a method for testing a smart camera provided by an embodiment of this application;
  • FIG. 3 is a schematic diagram of an image processing effect of AI recognition provided by an embodiment of this application;
  • FIG. 4 is a schematic diagram of the processing process of a method for testing a smart camera provided by an embodiment of this application;
  • FIG. 5 is a schematic flowchart of another smart camera testing method provided by an embodiment of this application;
  • FIG. 6 is a schematic diagram of the processing process of another smart camera testing method provided by an embodiment of this application;
  • FIG. 7 is a schematic diagram of a test device for a smart camera provided by an embodiment of this application;
  • FIG. 8 is a schematic diagram of another smart camera testing device provided by an embodiment of this application;
  • FIG. 9 is a schematic diagram of another smart camera testing device provided by an embodiment of this application;
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
  • FIG. 11 is a schematic diagram of a chip provided by an embodiment of this application.
  • The terms “first” and “second” are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, the features defined with “first” and “second” may explicitly or implicitly include one or more of these features. In the description of the present embodiment, unless otherwise specified, “plurality” means two or more.
  • the embodiments of the present application provide a testing method and testing device for a smart camera, which can be applied to testing smart camera devices, such as smart cameras and other devices or electronic equipment equipped with smart camera components.
  • the application scenario of this embodiment may be as shown in FIG. 1.
  • the first device refers to a test console, that is, a device or electronic device for performing test operations, viewing test results, and recording test results.
  • the first device may be a computer, a personal computer (PC), a notebook computer, or an ultra-mobile personal computer (UMPC), and it may also be at least one of a server, multiple servers, a cloud computing platform, or a virtualization center.
  • a test PC is taken as an example for description.
  • the first device may establish a connection with the second device through a universal serial bus (USB) network channel or a USB virtual serial port channel, so that the first device and the second device can transmit control information and/or code stream data through the connection.
  • the USB channel is a standard communication port for the first device to connect to an external device
  • the USB virtual serial port is a serial port virtualized in the first device through the USB communication device class, and is used to provide communication transmission for the first device 101.
  • the second device refers to a smart camera AIC with a built-in AI algorithm, or other devices or electronic equipment configured with AIC.
  • AIC refers to a camera with AI computing capabilities, which can perform calculations on objects taken by itself or input images through a built-in AI algorithm to identify the target object or detect objects in the image.
  • the second device can communicate and interconnect with the first device through a configured USB interface to transmit audio and video data.
  • the second device may also implement data interaction with the first device through Bluetooth technology, a wireless network, or a wired network, for example, may transmit a video stream based on an Internet protocol (IP).
  • the video stream is transmitted through the hypertext transfer protocol (HTTP) or the WebSocket protocol in the IP protocol suite. The HTTP protocol is a one-way communication protocol: the client initiates an HTTP request, and the server returns data. The WebSocket protocol is a two-way communication protocol: after the client and the server establish a connection, both the client and the server can actively send or receive data to each other.
  • the AIC involved in the embodiments of the present application may support the USB Video Class (UVC) protocol, and support video encoding and decoding capabilities.
  • UVC is a protocol standard defined for USB video capture electronic devices.
  • a camera device that provides a USB interface can support the application and implementation of this standard.
  • the application scenario of this application is based on testing the intelligent recognition function of the second device.
  • the input data of the test includes video data or an image collection, and the second device can superimpose the AI recognition result on the input video data or image collection to obtain the test result image; that is, the test result image can be the output video data or image collection superimposed with the AI recognition result, and the test result image may include the test image and AI information obtained by performing AI recognition on the test image.
  • the test principle in the embodiment of this application is as follows: taking a pre-acquired video stream or picture set as the test code stream, the AIC performs AI recognition calculation on the test code stream frame by frame, and the recognition result obtained in real time for the test image corresponding to each frame of the test code stream is the test result image.
  • the test PC synchronizes the test result image generated in real time with the original test image, thereby completing the synchronization test and drawing the test conclusion.
  • the embodiment of the present application provides a method for testing a smart camera. As shown in FIG. 2, the method may specifically include the following steps:
  • the first device sends a test code stream to the second device, where the test code stream includes multiple frames of test images.
  • the first device can obtain the test code stream from the test code stream library.
  • the test code stream library is a storage system for storing test code streams.
  • the test code stream library can be a file system deployed on the first device, or a file system stored on the first device or another server and shared with the second device through a network file system (NFS), and it may also be the storage system of a third device. Therefore, the test code stream may come from the storage system of the first device, or NFS, or the third device, which is not specifically limited in the embodiment of the present application.
  • NFS is an application system based on the User Datagram Protocol (UDP) or the Internet Protocol (IP), and is mainly implemented by adopting a remote procedure call (RPC) mechanism.
  • RPC provides a set of operations for accessing remote files that are independent of the machine, operating system, and low-level transfer protocol.
  • NFS is a network abstraction on top of the file system. The electronic device can use the NFS system to access the file system of the remote client through the network, and the access operation is implemented in a similar way as the electronic device accesses its local file system.
  • the test stream can be a collection of images or a video stream that has been encoded and compressed.
  • video refers to a continuous image sequence, which is composed of continuous frames, and one frame is an image. Due to the persistence effect of the human eye, when a sequence of frames is played at a certain rate, what we see is a video with continuous action. Due to the high similarity of images between consecutive frames, in order to facilitate storage and transmission, the original video can be encoded and compressed to remove redundancy.
  • the video encoding method refers to the method of converting files in the original video format into another video format file through compression technology.
  • the video stream is the data transmitted after encoding the original video file. For example, the H.264 video frame stream generated according to the ITU codec standard H.264, or the H.266 video frame stream generated according to the codec standard H.266.
  • the first device may obtain each frame of the test image of the test bit stream from the video player or the decoder, and send the test image to the second device frame by frame.
  • the first device may directly obtain each frame of test image in a video player that plays the test code stream, for example, the first player. It is also possible to obtain each frame of test image from the decoder that decodes the test bit stream. The first device may specifically select whether to obtain a multi-frame test image from the video player or obtain a multi-frame test image from the decoder according to the interface provided by the video player or the decoder.
  • After the first device acquires each frame of test image frame by frame, it sends the test images to the second device frame by frame.
  • the original format of each frame of test image in the test bit stream acquired by the first device may be an image in RGB or YUV format, or may be a video frame stream that has been coded and compressed, such as an H.264 video frame stream.
  • the first device obtains each frame of the test image of the test stream from the video player or decoder, and sends each frame of the test image to the second device frame by frame, which may also include:
  • the first device obtains multiple frames of original images of the test code stream frame by frame, and after encoding and compressing each frame of the original image one by one, sends each frame of the compressed test image to the second device.
  • the first device may also send to the second device after adjusting the resolution of multiple frames of original images.
  • the first device may also perform format conversion on multiple frames of original images and send them to the second device.
  • test image obtained after encoding and compressing the original image, adjusting the image resolution, or format conversion is an image that can be received and processed by the AI algorithm in the second device.
  • the image compression format is subject to the image format supported by the second device.
  • the test image can be compressed into the JPEG image format developed by the Joint Photographic Experts Group (JPEG), or the portable network graphics (PNG) format, etc.
  • For example, if the second device supports the JPEG format, the first device can choose to compress each frame of the captured original image into JPEG and send it to the second device; if the second device supports the PNG format, the first device can choose to compress each frame of the original image into PNG and send it to the second device.
  • whether to perform image compression on the test image can be determined according to the USB bandwidth resource of the test image transmitted between the first device and the second device.
  • For example, when the USB bandwidth between the first device and the second device is sufficient, the first device does not compress the multiple frames of test images and directly sends the original test images to the second device; when the USB bandwidth between the first device and the second device is limited, the first device can compress the multiple frames of test images and then send the compressed test images to the second device, which can save bandwidth resources and reduce the transmission delay of the test images.
  • the first device acquires multiple frames of test images of the test bit stream, and sends each frame of test images to the second device frame by frame, which may also include:
  • the first device obtains multiple frames of original images of the test code stream frame by frame, adjusts the resolution of each frame of the original images one by one, and generates each frame of the test image of the test code stream.
  • the first device may adjust the resolution of each frame of the test image before sending it to the second device frame by frame. For example, if the resolution of the multiple frames of test images obtained by the first device is 1280×800, and the resolution of the image that can be processed by the second device is 1024×600, the first device can adjust the resolution of the test images to 1024×600 frame by frame before sending them to the second device.
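  • The resolution adjustment above can be sketched with a minimal nearest-neighbour resize over a row-list image; this is a toy stand-in, as a real implementation would use a library routine such as `cv2.resize`.

```python
def resize(image, new_w, new_h):
    """Nearest-neighbour resize: image is a list of rows of pixels;
    returns a new_h x new_w image."""
    old_h, old_w = len(image), len(image[0])
    return [
        [image[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Tiny stand-in frame, downscaled the way a 1280x800 test image would be
# reduced to the 1024x600 resolution the second device can process.
src = [[(x, y) for x in range(8)] for y in range(8)]
dst = resize(src, 4, 3)
```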
  • S202 The second device receives a multi-frame test image sent from the first device.
  • the second device performs artificial intelligence AI recognition on the test image frame by frame, and obtains a test result image corresponding to each frame of the test image.
  • the second device receives the test image sent from the first device frame by frame, processes it according to the pre-configured AI recognition algorithm, and obtains a test result image corresponding to each frame of the test image.
  • the test result image and the test image have a one-to-one correspondence, and the corresponding relationship between the test result image and the test image can be determined by the same frame sequence number, video playback progress bar information, or frame time stamp.
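  • The one-to-one pairing above can be sketched by matching on a shared frame serial number; the dictionary field names are illustrative, and the same pairing works equally with timestamps or playback-progress information.

```python
def pair_by_frame_no(test_images, result_images):
    """Return (test, result) pairs keyed by the shared frame serial number."""
    results = {r["frame_no"]: r for r in result_images}
    return [(t, results[t["frame_no"]]) for t in test_images
            if t["frame_no"] in results]

tests = [{"frame_no": 1, "data": "t1"}, {"frame_no": 2, "data": "t2"}]
res = [{"frame_no": 2, "data": "r2"}, {"frame_no": 1, "data": "r1"}]
pairs = pair_by_frame_no(tests, res)
```

Note that the result images may arrive out of order; keying on the frame number restores the correspondence regardless of arrival order.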
  • the test result image may be an image after the test image is superimposed on the AI information, and the AI information is the recognition result information obtained after the test image is calculated by the AI algorithm. Therefore, the test result image may include the test image and AI information obtained by performing AI recognition on the test image, where the AI information may be represented by graphics or labels.
  • a rectangular frame can be added to the image to frame the detected object to form a test result image.
  • alternatively, an object label can be added to the image to obtain a test result image with the added object label.
  • For example, the test result image output by the second device after recognizing image 1 is image 1 with the added label “house”, and the test result image output by the second device after recognizing image 2 is image 2 with the added labels “person 1” and “person 2”.
  • the AI recognition algorithm in the embodiment of this application can be specifically implemented by a neural network model or a support vector machine algorithm. This application does not limit the specific AI recognition algorithm, nor does it specifically limit the representation method of the test result image. Technicians can choose and set according to actual needs.
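  • The superimposition described above (drawing a rectangular frame around the detected object and attaching a label) can be sketched on a toy character-grid image; real code would draw on a pixel buffer, e.g. with `cv2.rectangle` and `cv2.putText`.

```python
def superimpose(image, box, label, mark="#"):
    """Draw a rectangular frame on a mutable grid and attach a label.

    image: grid (list of rows) of characters; box: (x0, y0, x1, y1) inclusive.
    Returns the test-result structure: the marked image plus its label.
    """
    x0, y0, x1, y1 = box
    for x in range(x0, x1 + 1):   # top and bottom edges
        image[y0][x] = mark
        image[y1][x] = mark
    for y in range(y0, y1 + 1):   # left and right edges
        image[y][x0] = mark
        image[y][x1] = mark
    return {"image": image, "label": label}

img = [[" "] * 8 for _ in range(6)]
result = superimpose(img, (1, 1, 6, 4), "house")
```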
  • the second device sends a test result code stream to the first device, where the test result code stream includes multiple frames of test result images.
  • the second device sends each frame of the test result image obtained above to the first device frame by frame.
  • the second device may encode and compress the test result images frame by frame to generate a test result code stream, and then the second device sends the test result code stream to the first device frame by frame.
  • S205 The first device receives the test result code stream sent from the second device.
  • the first device receives the result code stream sent from the second device, and obtains each frame of test result image after decoding.
  • S206 The first device controls the test code stream and the test result code stream to be displayed on the display together.
  • When the first device receives the first frame of test result image, the first device automatically opens another video player, such as the second player, to play the test result images frame by frame.
  • the first device controls the test code stream and the test result code stream to be displayed on the display together, which specifically includes: the first device controls the first player to play the test code stream, and after the first device receives the test result code stream sent from the second device, the first device controls the second player to play the test result code stream.
  • For example, when the first device is a test PC and the second device is a smart camera AIC, when the test PC receives the first frame of test result image of the test result code stream sent by the AIC, the test PC plays that first frame of test result image through the second player, and plays the received test result code stream frame by frame.
  • the first player on the test PC obtains the test video to be played from the test code stream library, gets each frame of test image of the test video frame by frame, and sends it to the AIC frame by frame after encoding and compression processing.
  • After the AIC obtains the test image through decoding, it performs AI recognition on the test image frame by frame, obtains each frame of test result image, and encodes it before sending it to the test PC.
  • After the test PC receives the test result images sent by the AIC, the second player on the test PC plays the test result video frame by frame, and the test result video includes multiple frames of test result images.
  • the first player and the second player in the embodiment of this application may be web-based video players, or they may be client video players installed on the first device, which is not specifically limited in this application.
  • sending test images, AI calculation, and receiving test result images are all performed frame by frame, and the test video and the test result video are both played frame by frame in the sequence of image frames. Therefore, the first device uses two players to play the test code stream and the test result code stream frame by frame, and the first player and the second player can simultaneously play the test code stream and the test result code stream on the same display interface, which is convenient for testers to compare and view the test code stream and the test result code stream, improving the efficiency of the synchronous comparison test.
  • the first device may obtain the currently played test image from the first player as the target test image when the first player is playing the test code stream, and send the target test image to the second device .
  • the target test image is the test image currently played by the first player, as acquired by the first device; that is, the first device can send the test image currently played by the first player to the second device in real time, so that the second device can perform AI recognition processing on the test image sent by the first device in real time, and send the test result image corresponding to the obtained test image to the first device in real time.
  • the first device displays each frame of the test result image frame by frame, and while displaying the test result image, it displays the test image corresponding to the test result image.
  • In the case that the time delay between the second device performing AI recognition processing on each frame of test image and the first device receiving each frame of test result image can be ignored, the playback of the test code stream and the test result code stream on the first device is synchronized, that is, the test image and the test result image corresponding to the test image are displayed synchronously, so that testers can compare and view the test image and the corresponding test result image at the same time, which significantly improves the intuitiveness of the smart camera comparison test and the flexibility of the test operation, and improves the test efficiency.
  • the first device can control the playback of the test result images by the second player through the playback control of the test images by the first player.
  • Technicians can control the image processing of the second device by controlling the player running on the first device, for example, through pause, play, fast forward, or frame rate adjustment, so as to achieve real-time synchronization of the test video and the test result video.
  • For example, when the test PC receives a video pause operation clicked by the technician, the test PC pauses sending test images to the AIC, and the AIC pauses image processing; when a video playback operation is clicked, the AIC continues to receive test images, performs image processing, and sends the test results to the test PC, achieving playback synchronization. When the technician sets the playback speed on the test PC, for example, sets the frame rate to 15 Hz, the AIC receives test images at 15 frames per second, so that the playback speed of the test video set on the test PC becomes the playback speed of the test result video.
  • the first device may control the test image and the test result image corresponding to the test image to be displayed synchronously in the following specific manner: the first device controls, by adjusting the playback rate at which the first player plays the test code stream, the second player to play the test result image corresponding to the test image played by the first player.
  • the first device can control the playback rate at which the first player plays the test code stream, so that when the playback interval between adjacent frames of the test code stream played by the first player is less than the above-mentioned test time delay, the first device can reduce the playback rate of the first player.
  • the manner in which the first device reduces the playback rate of the first player may be manually controlled, or the playback rate may be automatically adjusted by the first device according to a preset condition.
  • For example, the initial playback rate of the first player on the test PC is 30 frames per second, that is, the time interval at which the first player plays adjacent frames of the test code stream is about 0.033 seconds, and it is determined by detection that the time delay from the first device sending one frame of test image to receiving the test result image corresponding to that test image is 1 second. In this case, the playback rate of the first player can be reduced so that the playback interval between adjacent frames of the test code stream played by the first player is no less than the test delay; at this time, the test image played by the first player and the test result image played by the second player are synchronized.
  • In this way, the test code stream and the test result code stream on the first device are played synchronously, that is, the test image and the test result image corresponding to the test image are displayed synchronously, so that the tester can compare the test image and the test result image corresponding to the test image at the same time, which significantly improves the intuitiveness of the smart camera comparison test and the flexibility of the test operation, and improves the test efficiency.
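  • The rate adjustment described above reduces the playback rate so that the inter-frame playback interval is no shorter than the measured per-frame test delay; a minimal sketch of that rule (the function name is illustrative):

```python
def synced_rate(initial_rate_fps, test_delay_s):
    """Largest playback rate whose frame interval is >= the test delay."""
    max_rate = 1.0 / test_delay_s
    return min(initial_rate_fps, max_rate)

# 30 fps playback (about 0.033 s between frames) against a 1 s test delay
# must be slowed to at most 1 frame per second.
rate = synced_rate(30.0, 1.0)
```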
  • the embodiment of the present application also provides another smart camera test method; that is, in step S203, after the second device performs AI recognition on the test image to obtain the test result image corresponding to the test image, as shown in FIG. 5, the test method may further include:
  • S501 The second device splices the test image and the test result image corresponding to the test image frame by frame to obtain a mixed image corresponding to the test image.
  • For example, the method of image splicing may be up-and-down splicing; the second device may splice each frame of test image and the test result image corresponding to that frame one by one, from top to bottom, to obtain a mixed image of each frame of test image.
  • For example, the test image is displayed on top, and the test result image corresponding to the test image is displayed on the bottom, to obtain a mixed image. Or vice versa, the test result image corresponding to the test image is displayed on top, and the test image is displayed on the bottom.
  • The method of image splicing may also be left-and-right splicing; the second device can stitch each frame of test image and the test result image corresponding to that frame one by one to obtain a mixed image of each frame of test image. For example, the test image is displayed on the left, and the test result image corresponding to the test image is displayed on the right, to obtain a mixed image. Or vice versa, the test result image corresponding to the test image is displayed on the left, and the test image is displayed on the right.
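  • The two splicing layouts above can be sketched with row-lists as toy stand-ins for pixel buffers (real code would concatenate image arrays, e.g. with `numpy.vstack`/`numpy.hstack`):

```python
def splice_vertical(test_img, result_img):
    """Up-and-down splicing: stack the result image's rows below the test image."""
    return test_img + result_img

def splice_horizontal(test_img, result_img):
    """Left-and-right splicing: join corresponding rows side by side."""
    return [t + r for t, r in zip(test_img, result_img)]

test_img = [[1, 2], [3, 4]]
result_img = [[5, 6], [7, 8]]
mixed_v = splice_vertical(test_img, result_img)     # 4 rows x 2 cols
mixed_h = splice_horizontal(test_img, result_img)   # 2 rows x 4 cols
```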
  • the second device sends a test result code stream to the first device, where the test result code stream includes a multi-frame mixed image.
  • the second device may send the mixed image obtained by the above processing to the first device frame by frame, or may send the mixed image of multiple frames to the first device after being encoded and compressed.
  • the first device controls the test code stream and the test result code stream to be displayed on the display together, which may specifically include the following two display modes:
  • the first device displays the mixed image in the test result code stream on the display.
  • the first device can play the received multi-frame mixed images frame by frame, that is, the first device displays the multi-frame mixed images in the test result code stream frame by frame. Since the mixed image includes the test image and the test result image, and the test image and the test result image are in a one-to-one correspondence, the tester can perform a synchronous comparison test frame by frame and complete the real-time synchronous comparison of the test video and the test result video. Since each frame of test image and the corresponding test result image are spliced and displayed in one image, it is convenient for technicians to perform comparison tests at the same time, which can improve test efficiency.
  • AIC obtains the test stream from the test stream library and, after decapsulation or decoding, obtains the original test images; after AI recognition of each test image, the corresponding test result image is obtained.
  • AIC stitches the test image and the test result image frame by frame to obtain a mixed image.
  • the AIC sends the multi-frame mixed image to the test PC, so that the test PC can play frame by frame according to the received mixed image to complete the real-time comparison test.
  • Step 1 The first device performs image segmentation processing on the mixed image in the test result code stream frame by frame to obtain each frame of test image and a test result image corresponding to each frame of test image.
  • the first device can also process the received multi-frame mixed image to obtain each frame of test image and the test result image corresponding to each frame of test image, and then separately compare the test video and The test result video is played frame by frame to compare the test results.
  • the test video includes multiple frames of test images
  • the test result video includes multiple frames of test result images.
  • the first device may perform image segmentation according to the above-mentioned image splicing method, and then split each frame of mixed image into each frame of test image and test result image.
  • if the mixed image is spliced left and right (the test image on the left, the test result image on the right), the first device can take the left half of the mixed image as the test image and the right half as the test result image, and then play them separately to achieve a synchronized test.
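  • the segmentation step described above can be sketched as the inverse of the stitching. As a non-limiting illustration (helper name not from the patent), each mixed frame is a list of pixel rows, and the left/right halves are recovered by splitting every row at its midpoint:

```python
def split_left_right(mixed_frame):
    """Recover (test_frame, result_frame) from a left/right-stitched frame."""
    half = len(mixed_frame[0]) // 2
    test_frame = [row[:half] for row in mixed_frame]
    result_frame = [row[half:] for row in mixed_frame]
    return test_frame, result_frame
```

A top/bottom-stitched frame would instead be split at the middle row.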
  • Step 2 While the first device controls the first player to play the test image, it controls the second player to play the test result image.
  • the first device controls the two players to play separately according to the test images obtained by the above segmentation and the corresponding test result images; while the first player plays a certain test image, the first device controls the second player to play the corresponding test result image.
  • testers can perform synchronous comparison tests in real time, and the test is intuitive, and test problems can be found in real time, thereby improving test efficiency.
  • if the test code stream obtained by the second device from the test code stream library is in a package (container) format, the second device cannot directly process the packaged data and needs to decapsulate the test code stream before acquiring each frame of test image.
  • for example, the test stream obtained from the test stream library is in the MPEG-4 Part 14 (MP4) format, a packaging (container) standard for audio and video information.
  • the second device needs to decapsulate the test bit stream in MP4 format to obtain the image part therein, for example, decapsulate to obtain the H.264 bare stream. If the test bit stream itself is in the H.264 bare stream format, no decapsulation processing is required, and each frame of test image can be obtained based on the H.264 bare stream.
  • if the test code stream obtained by the second device from the test code stream library is in an encoded and compressed format, the test code stream needs to be decoded before each frame of test image is obtained.
  • if the test bit stream itself is in the H.264 bare-stream format, no decoding processing is required, and each frame of test image can be obtained directly from the H.264 bare stream.
  • if the code stream is in JPEG format or PNG format, it needs to be decompressed according to the corresponding compression rules to obtain each frame of test image.
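  • the MP4 decapsulation described above is commonly done with ffmpeg's `h264_mp4toannexb` bitstream filter, which extracts the H.264 elementary (bare) stream from the container without re-encoding. As a non-limiting sketch (the file names and helper are illustrative; running the command requires ffmpeg to be installed), the command can be built as:

```python
def demux_command(src, dst):
    """Build an ffmpeg command that extracts the H.264 elementary stream
    from an MP4 container without re-encoding (stream copy)."""
    return ["ffmpeg", "-i", src,
            "-c:v", "copy",                  # copy, do not re-encode
            "-bsf:v", "h264_mp4toannexb",    # MP4 length-prefixed -> Annex B
            "-f", "h264", dst]
```

The resulting `.h264` file is a bare stream from which frames can be obtained directly, as described above.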
  • the technician needs to control the playback speed and progress of the test video, for example, by operating the video player on the first device to pause, play, fast-forward, rewind, drag the progress bar, or step through the test frame by frame. Therefore, the first device can establish communication with the second device and transmit control information through a predefined control protocol, thereby controlling the processing of the second device and keeping the playback of the test video and the test result video on the first device synchronized. For example, the first device can control the second device to start or pause the processing of video frames, control the processing frame rate of the second device, or locate the video frame currently processed by the second device. Therefore, the test method can also include:
  • the first device sends hypertext transfer protocol HTTP request information to the second device, where the request information may include a uniform resource locator (URL), which is used by the second device to obtain the specific control operation according to the URL, so as to achieve control of the test image to be processed.
  • after receiving the HTTP request information, the second device can control the test image to be processed based on the control operation.
  • the first device can operate pause, play, fast-forward, fast-rewind, control the playback speed, jump to a specified video frame, or adjust the frame rate on the first player.
  • the first device may send corresponding HTTP request information to the second device in response to the operation control of the video playback by the technician, so as to control the image processing of the second device, thereby realizing the playback control of the mixed code stream.
  • Technicians can flexibly control the video test progress by operating the test PC, so the real-time comparison test is intuitive and can effectively improve the test efficiency.
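  • a control request of the kind described above might be built as follows. This is a non-limiting sketch: the endpoint path (`/control/...`) and the parameter name `frame_seq` are assumptions, since the patent does not define a concrete URL scheme or field names.

```python
import json
from urllib import request

def build_control_request(host, operation, frame_seq=None):
    """Build an HTTP POST request whose URL names the control operation
    (e.g. pause, play, seek) and whose JSON body carries input parameters.
    The path and field name are illustrative, not from the patent."""
    url = f"http://{host}/control/{operation}"
    params = {} if frame_seq is None else {"frame_seq": frame_seq}
    return request.Request(url, data=json.dumps(params).encode(),
                           headers={"Content-Type": "application/json"},
                           method="POST")
```

The second device would map the URL to the control operation and the JSON body to the test image to be processed.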
  • the first device and the second device have a built-in web server, such as an HTTP server, and the first device and the second device may use the HTTP protocol to transmit control signaling for network communication.
  • the HTTP protocol can be used to transmit web page information of the World Wide Web (WWW) service, and the HTTP protocol is transmitted in plain text.
  • the test PC can send HTTP request information to the AIC, and control the processing of video frames on the AIC through the control signaling included in the HTTP request information.
  • different control operations may correspond to different uniform resource locators (URLs).
  • the request information may include the URL corresponding to the specific control operation, which is used by the AIC to obtain the specific control operation according to the URL address, so as to realize the playback control of the resulting code stream.
  • the request information may include input parameters, which are used to identify the information of the test image corresponding to the aforementioned control operation.
  • the test PC may send request information in JavaScript object notation (JSON) format to the AIC.
  • the request information may include the URL of the control operation and may also carry input parameters.
  • the input parameters may include the information of the test image that the test PC requests to play in the next frame; for example, the input parameter may include a frame sequence number or a frame time stamp.
  • AIC determines the control operation according to the URL in the request information, and determines the test image information requested by the test PC to be played according to the input parameters, and then returns the response information in JSON format to the test PC.
  • the response information may include output parameters, that is, an indication of whether the AIC successfully received the control information sent by the test PC.
  • AIC adjusts the currently processed test image according to the received control operation, and then sends the test result image to the test PC, thereby adjusting the currently played frame of the test video and the test result video so that they can be compared and viewed synchronously in real time.
  • the JSON format is a lightweight, development-language-independent standard format for data storage and transmission.
  • the HTTP request information sent by the test PC to the AIC includes a control operation to jump to a designated test image
  • the HTTP request information may also include the frame sequence number of the designated test image to be processed corresponding to the control operation.
  • the AIC can determine the frame serial number of the designated test image based on the control operation and input parameters and adjust the currently processed test image to the designated test image.
  • the foregoing request information may also include the frame sequence number or time stamp of the test image corresponding to the control operation, for the second device to locate the specific test image according to the frame sequence number, or to determine the specific test image according to the time stamp and the total duration of the test bit stream, so that the second device can return the test result image corresponding to the test image specified by the first device, thereby improving the flexibility and efficiency of the test operation.
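  • the timestamp-based locating described above can be sketched as follows. This is illustrative only: the function name is hypothetical and a constant frame rate is assumed, so the frame index is obtained by linear interpolation over the total duration of the test bit stream.

```python
def frame_from_timestamp(ts_ms, total_duration_ms, total_frames):
    """Map a time stamp (ms) to a frame sequence number using the total
    duration of the test bit stream; assumes a constant frame rate."""
    idx = round(ts_ms * (total_frames - 1) / total_duration_ms)
    # Clamp to the valid frame range of the stream.
    return max(0, min(total_frames - 1, idx))
```

With variable-frame-rate content, the second device would instead look the frame up in the container's timestamp index.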
  • the network communication between the first device and the second device may also use a server-sent event (SSE) as a bearer protocol.
  • the second device may send server-sent event SSE information to the first device, and the SSE information may include the location information or time information of the test image currently being processed by the second device; the first device then receives the SSE information sent from the second device and controls the currently displayed test image and test result image according to that location information or time information.
  • SSE is based on the HTTP protocol and can push information to a web browser as streaming information. SSE is a one-way transmission channel from the server to the receiving browser: what SSE sends is not a one-time data packet but a data stream, which can be sent continuously.
  • for example, after the sending-end electronic device (such as a server) sends a data stream, the receiving-end electronic device does not close the connection and continues to receive new data streams sent by the sending-end electronic device; in this scenario, video playback means continuously sending a sequence of video frames.
  • the location information of the test image refers to the location information corresponding to the playback progress bar where the specified test image is located during the playback of the test stream.
  • Time information refers to the timestamp corresponding to the test image.
  • the second device may be set to report the currently processed test image information to the first device every preset time period; that is, through SSE, the second device periodically reports the currently processed test image information to the first device.
  • for example, the AIC can periodically report to the test PC the position information of the currently processed test image, or its frame sequence number or time stamp. The test PC can then determine the current test image from the reported time stamp and the pre-configured total duration of the test video, or from the frame sequence number and the total number of frames, or from the position information of the test image and the total progress of the test bit stream, and accordingly adjust the frame currently played by the video player on the test PC to achieve a real-time synchronized test.
  • the first device establishes a communication connection with the second device and periodically obtains the location information or time information of the test image processed by the second device, so as to control the test image and the test result image currently displayed by the first device. That is, through the control protocol, the video frame played by the first device and the video frame on which the second device performs the AI calculation are synchronized, so that the technician can flexibly adjust the playback progress according to test needs; the synchronized comparison test is more intuitive, which improves the flexibility of the test operation and the test efficiency.
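  • a minimal sketch of the periodic SSE report described above, assuming a JSON payload (the field names `frame_seq` and `timestamp_ms` are illustrative; the patent does not fix a schema). Each event follows the SSE wire format of a `data:` line terminated by a blank line:

```python
import json

def sse_event(frame_seq, ts_ms):
    """Format one server-sent event reporting the frame currently being
    processed by the second device. Field names are illustrative."""
    payload = json.dumps({"frame_seq": frame_seq, "timestamp_ms": ts_ms})
    return f"data: {payload}\n\n"
```

The second device would write such events to the open HTTP response at the preset interval, and the first device would parse each `data:` line to adjust the currently displayed frame.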
  • the first device or the second device, etc. include hardware structures and/or software modules corresponding to each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
  • the embodiments of the present application can divide the first device or the second device into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. It should be noted that the division of modules in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 7 shows a schematic structural diagram of a test device for a smart camera.
  • the test device may be the first device, a chip or a system-on-chip in the first device, or another combined device or component that can realize the functions of the first device, and the test device may be used to execute the functions of the first device involved in the above embodiments.
  • the testing device 700 may include:
  • the sending module 701 is configured to send a test code stream to the second device, where the test code stream includes multiple frames of test images.
  • the receiving module 702 is configured to receive the test result code stream sent from the second device, the test result code stream includes multiple frames of test result images, the test result images are obtained by superimposing the recognition result information on the test images corresponding to the test result images, and the recognition result The information is obtained by the second device performing artificial intelligence AI recognition on the test image.
  • the control module 703 is used to control the test code stream and the test result code stream to be displayed on the display together.
  • the testing device 700 may further include:
  • the acquiring module is used to acquire the multi-frame test image of the test code stream frame by frame from the first player or the decoder, and the first player is a player that plays the test code stream.
  • the sending module 701 is specifically configured to send multiple frames of test images to the second device frame by frame.
  • the acquisition module is specifically used to: acquire a target test image from the first player, where the target test image is the test image currently played by the first player.
  • the sending module 701 is specifically configured to send the target test image to the second device.
  • the acquisition module is specifically used to: acquire multiple frames of original test images of the test stream from the first player or decoder; and perform at least one of image compression, format conversion, or resolution adjustment on the original test images frame by frame to obtain the multi-frame test images of the test bit stream.
  • control module 703 is specifically configured to: control the first player to play the test code stream; and after receiving the test result code stream sent from the second device, control the second player to play the test result code stream.
  • control module 703 is specifically configured to: control the test image and the test result image corresponding to the test image to be displayed synchronously.
  • control module 703 is specifically configured to: control the second player to play the test result image corresponding to the test image played by the first player by adjusting the playback rate of the test stream played by the first player .
  • the test result code stream includes a mixed image
  • the mixed image is a spliced image of the test image and the test result image corresponding to the test image.
  • control module 703 is also specifically used to: display the mixed image in the test result code stream on the display; or, perform image segmentation processing on the mixed image in the test result code stream frame by frame to obtain each frame of test image and the test result image corresponding to each frame of test image, and control the second player to play the test result image while controlling the first player to play the test image.
  • the sending module 701 is further configured to send Hypertext Transfer Protocol HTTP request information to the second device, where the HTTP request information includes the uniform resource locator URL corresponding to the control operation, and the control operation is used to indicate The second device controls the test image to be processed.
  • the HTTP request information further includes input parameters, the input parameters include the frame serial number or the time stamp of the test image to be processed, and the input parameter is used to indicate the test image to be processed corresponding to the control operation.
  • the receiving module 702 is further configured to: receive server-sent event SSE information sent from the second device, where the SSE information includes location information or time information of the test image currently processed by the second device; and the control module, Specifically, it is also used to control the currently displayed test image and test result image according to location information or time information.
  • the test code stream comes from the storage system of the test device, or the network file system NFS of the test device, or the storage system of the third device.
  • test device 800 may include:
  • the receiving module 801 is configured to receive the test image sent from the first device frame by frame, where the test image is the currently processed test image obtained by the first device from the first player or decoder.
  • the AI recognition module 802 is used to perform artificial intelligence AI recognition on the test image frame by frame to obtain a test result image corresponding to each frame of the test image.
  • the test result image includes the test image and recognition result information obtained by performing AI recognition on the test image.
  • the sending module 803 is configured to send a test result code stream to the first device, where the test result code stream includes multiple frames of test result images.
  • the testing device 800 further includes:
  • the processing module is configured to perform at least one of decompression, format conversion, or resolution adjustment on the multi-frame original test image obtained by the receiving module frame by frame to obtain a multi-frame test image of the test bit stream.
  • test device 900 may include:
  • the obtaining module 901 is used to obtain a test code stream, and the test code stream includes multiple frames of test images.
  • the AI recognition module 902 is used to perform artificial intelligence AI recognition on the test images in the test bit stream frame by frame to obtain the test result image corresponding to each frame of the test image.
  • the test result image includes the test image and the recognition result information obtained by performing AI recognition on the test image.
  • the processing module 903 is configured to splice the test image and the test result image corresponding to the test image frame by frame to obtain a mixed image corresponding to the test image.
  • the sending module 904 is configured to send a test result code stream to the first device, where the test result code stream includes multiple frames of mixed images.
  • the testing device 900 may further include:
  • the decoding module is used to decode and/or decapsulate the test bit stream to obtain multiple frames of test images.
  • the testing device 900 may further include:
  • the receiving module is used to receive the Hypertext Transfer Protocol HTTP request information sent from the first device, where the HTTP request information includes the uniform resource locator URL corresponding to the control operation; the AI recognition module is specifically further used to control the test image to be processed based on the control operation.
  • the HTTP request information further includes input parameters, the input parameters include the frame serial number or the time stamp of the test image to be processed, and the input parameter is used to indicate the test image to be processed corresponding to the control operation.
  • the sending module 904 is further configured to send server-sent event SSE information to the first device.
  • the SSE information includes the location information or time information of the test image currently processed by the second device, and is used to instruct the first device to control the display of the test image and the test result image.
  • the test code stream comes from the storage system of the first device, or the network file system NFS of the test device 900, or the storage system of the third device.
  • the above-mentioned transmitting module may be a transmitter, which may include an antenna and a radio frequency circuit, etc.
  • the processing module may be a processor, such as a baseband chip.
  • the sending module may be a radio frequency unit
  • the processing module may be a processor.
  • the sending module may be an output interface of the chip system
  • the processing module may be a processor of the chip system, such as a central processing unit (CPU).
  • for the specific execution process and embodiments of the above-mentioned apparatus 700, refer to the steps performed by the first device in the above method embodiment and the related descriptions; for the above-mentioned apparatus 800 or apparatus 900, refer to the steps performed by the second device in the foregoing method embodiments and the related descriptions. For the technical problems solved and the technical effects obtained, refer to the content described in the foregoing embodiments, which will not be repeated here.
  • the testing device is presented in the form of dividing various functional modules in an integrated manner.
  • the "module” herein may refer to a specific circuit, a processor and memory that executes one or more software or firmware programs, an integrated logic circuit, and/or other devices that can provide the above-mentioned functions.
  • the testing device can take the form shown in Figure 10 below.
  • FIG. 10 is a schematic structural diagram of an exemplary electronic device 1000 shown in an embodiment of the application.
  • the electronic device 1000 may be the first device or the second device in the foregoing embodiments, and is used to execute the test method for a smart camera in the foregoing embodiments.
  • the electronic device 1000 may include at least one processor 1001, a communication line 1002, and a memory 1003.
  • the processor 1001 may be a general-purpose central processing unit (central processing unit, CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits.
  • the communication line 1002 may include a path to transmit information between the above-mentioned components, and the communication line may be, for example, a bus.
  • the memory 1003 can be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it can also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the memory can exist independently, and is connected to the processor through the communication line 1002.
  • the memory can also be integrated with the processor.
  • the memory provided by the embodiment of the present application is usually a non-volatile memory.
  • the memory 1003 is used to store computer program instructions involved in executing the solutions of the embodiments of the present application, and the processor 1001 controls the execution.
  • the processor 1001 is configured to execute computer program instructions stored in the memory 1003, so as to implement the method provided in the embodiment of the present application.
  • the computer program instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
  • the processor 1001 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 10.
  • the electronic device 1000 may include multiple processors, such as the processor 1001 and the processor 1007 in FIG. 10. These processors can be single-core (single-CPU) processors or multi-core (multi-CPU) processors.
  • the processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
  • the electronic device 1000 may further include a communication interface 1004.
  • the electronic device can send and receive data through the communication interface 1004, or communicate with other devices or a communication network.
  • the communication interface 1004 can be, for example, an Ethernet interface, a radio access network (RAN) interface, a wireless local area network (WLAN) interface, or a USB interface, etc.
  • the electronic device 1000 may further include an output device 1005 and an input device 1006.
  • the output device 1005 communicates with the processor 1001 and can display information in a variety of ways.
  • the output device 1005 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector, etc.
  • the input device 1006 communicates with the processor 1001, and can receive user input in a variety of ways.
  • the input device 1006 may be a mouse, a keyboard, a touch screen device, or a sensor device.
  • the electronic device 1000 may be a desktop computer, a portable computer, a web server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, a smart camera, or a device with a structure similar to that shown in FIG. 10.
  • the embodiment of the present application does not limit the type of the electronic device 1000. If it is used to implement the method of the second device in the foregoing embodiment, the electronic device 1000 needs to be equipped with a smart camera.
  • the processor 1001 in FIG. 10 may invoke the computer program instructions stored in the memory 1003 to cause the electronic device 1000 to execute the method in the foregoing method embodiment.
  • the function/implementation process of each processing module in FIG. 7, FIG. 8 or FIG. 9 may be implemented by the processor 1001 in FIG. 10 calling computer program instructions stored in the memory 1003.
  • the function/implementation process of the control module 703 and the acquisition module in FIG. 7 can be implemented by the processor 1001 in FIG. 10 calling a computer execution instruction stored in the memory 1003.
  • the function/implementation process of the AI identification module 802 and the processing module in FIG. 8 can be implemented by the processor 1001 in FIG. 10 calling a computer execution instruction stored in the memory 1003.
  • the function/implementation process of the acquisition module 901, the AI identification module 902, the processing module 903, or the decoding module in FIG. 9 can be implemented by the processor 1001 in FIG. 10 calling a computer execution instruction stored in the memory 1003.
  • a computer-readable storage medium including instructions is also provided.
  • the foregoing instructions can be executed by the processor 1001 of the electronic device 1000 to complete the smart camera testing method of the foregoing embodiment. Therefore, the technical effects that can be obtained can refer to the above-mentioned method embodiments, which will not be repeated here.
  • all or part of the above embodiments may be implemented by software, hardware, firmware, or any combination thereof.
  • when implemented by a software program, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer program instructions When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • FIG. 11 is a schematic structural diagram of a chip provided by an embodiment of the application.
  • the chip 1100 includes one or more processors 1101 and an interface circuit 1102.
  • the chip 1100 may further include a bus 1103.
  • the processor 1101 may be an integrated circuit chip with signal processing capabilities. In the implementation process, the steps of the foregoing method can be completed by an integrated logic circuit of hardware in the processor 1101 or instructions in the form of software.
  • the above-mentioned processor 1101 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods and steps disclosed in the embodiments of the present application can be implemented or executed by it.
  • the general-purpose processor may be a microprocessor, or the processor may also be any conventional processor or the like.
  • the interface circuit 1102 is used for sending or receiving data, instructions or information.
  • the processor 1101 may use the data, instructions or other information received by the interface circuit 1102 to perform processing, and may send the processing completion information through the interface circuit 1102.
  • the chip 1100 further includes a memory.
  • the memory may include a read-only memory and a random access memory, and provides operation instructions and data to the processor.
  • a part of the memory may also include a non-volatile random access memory (Non-Volatile Random Access Memory, NVRAM).
  • the memory stores executable software modules or data structures.
  • the processor can perform corresponding operations by calling operation instructions stored in the memory (the operation instructions may be stored in an operating system).
  • the chip 1100 may be used in the test device (including the first device and the second device) involved in the embodiment of the present application.
  • the interface circuit 1102 may be used to output the execution result of the processor 1101.
  • the respective functions of the processor 1101 and the interface circuit 1102 can be implemented through hardware design, through software design, or through a combination of software and hardware, which is not limited here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present application provides a smart camera testing method and apparatus, relating to the field of computer technology, for solving the prior-art problems that testing a smart camera involves complicated operations that are insufficiently intuitive and flexible, with low test efficiency. The method includes: a first device sends a test code stream to a second device, the test code stream including multiple frames of test images; the first device receives a test result code stream sent by the second device, the test result code stream including multiple frames of test result images, where a test result image is obtained by superimposing recognition result information on the test image corresponding to the test result image, and the recognition result information is obtained by the second device performing artificial intelligence (AI) recognition on the test image; and the first device controls the test code stream and the test result code stream to be displayed together on a display.

Description

Smart Camera Testing Method and Apparatus — Technical Field
The present application relates to the field of computer technology, and in particular to a smart camera testing method and apparatus.
Background
At present, smart cameras (AI Camera, AIC) are increasingly widely used. They perform computation through a built-in artificial intelligence (AI) algorithm; for example, AI computation can detect and recognize the photographed objects in an image, or perform image segmentation on a captured image. Therefore, in the AIC field, after a developer completes parameter training of an AI algorithm and updates the AI algorithm into the AIC, the updated AI algorithm usually also needs to be run in the AIC to test it, to confirm whether the updated AI algorithm can accurately recognize the photographed object or whether it solves the expected problem.
In existing testing techniques, the AIC obtains a test code stream or a test video, runs the AI algorithm to recognize the test code stream or test video frame by frame, superimposes the test result obtained from the AI computation onto the test video frame, and sends the result frame by frame to a test computer, so that the test result is viewed on the test computer and compared frame by frame with the original test code stream or test video to reach a test conclusion on the AI algorithm.
With the above testing technique, a tester can view the test result code stream frame by frame on the test computer, but cannot compare the test result code stream with the test code stream synchronously, nor test by pausing, fast-forwarding, dragging, or locating by time. The test operations are complicated and insufficiently intuitive and flexible, and test efficiency is low.
Summary
The present application provides a smart camera testing method and apparatus, which solve the prior-art problems that testing a smart camera involves complicated operations that are insufficiently intuitive and flexible, with low test efficiency.
To achieve the above objective, the present application adopts the following technical solutions:
According to a first aspect, a smart camera testing method is provided, applied to a first device. The method includes: the first device sends a test code stream to a second device, the test code stream including multiple frames of test images; the first device receives a test result code stream sent by the second device, the test result code stream including multiple frames of test result images, where a test result image is obtained by superimposing recognition result information on the test image corresponding to the test result image, and the recognition result information is obtained by the second device performing artificial intelligence (AI) recognition on the test image; and the first device controls the test code stream and the test result code stream to be displayed together on a display.
In the above technical solution, the first device obtains multiple frames of test images and sends them to the second device for AI computation. After receiving the test result images sent by the second device, the first device actively controls the test code stream and the test result code stream to be displayed together on the display, so that a tester can perform comparison testing based on the two streams, making it easy to find problematic images in the test result code stream without manual control by the tester. This improves the intuitiveness and efficiency of smart camera testing, and the test process is simpler.
In a possible design, the first device sending the test code stream to the second device specifically includes: the first device obtains the multiple frames of test images of the test code stream frame by frame from a first player or a decoder, the first player being the player that plays the test code stream; and the first device sends the multiple frames of test images to the second device frame by frame.
In the above possible implementation, the first device can obtain the multiple test images frame by frame through the player or decoder and send them to the second device, so that the first device can receive in real time the test result images obtained after the second device performs AI recognition on the test images. By controlling the test code stream and the test result code stream to be displayed together on the display for testing, the intuitiveness of comparing test images with test result images is improved and test efficiency is increased.
In a possible design, the first device sending the test code stream to the second device specifically includes: in the case where the first player is playing the test code stream, the first device obtains a target test image from the first player, the target test image being the test image currently played by the first player; and the first device sends the target test image to the second device.
In the above possible implementation, the first device can send the test image currently played by the first player to the second device in real time, so that the second device performs AI recognition processing in real time according to the test image sent by the first device and sends the resulting test result image corresponding to the test image back to the first device. The first device sends the currently playing test image to the second device, and after AI recognition the second device returns the test result image in real time for synchronous playback. There is a certain test delay between the first device sending a test image and receiving the returned test result image, which can be understood as the playback delay between the test image and the test result image. Since the time needed to process one frame is very short, this test delay is usually small, and approximately synchronous playback of the test image and the test result image can be achieved. In one possible case, for example when the test delay is smaller than the interval at which the first player plays test images frame by frame, playback of the test code stream and the test result code stream on the first device is strictly synchronous, so the tester can compare a test image and its corresponding test result image at the same time, significantly improving the intuitiveness and flexibility of comparison testing of the smart camera and improving test efficiency.
In a possible design, the first device obtaining the multiple frames of test images of the test code stream frame by frame from the first player or the decoder specifically includes: the first device obtains multiple frames of original test images of the test code stream, and performs at least one of image compression, format conversion, or resolution adjustment on the original test images frame by frame to obtain the multiple frames of test images of the test code stream.
In the above possible implementation, the first device can compress the obtained original test images; the resulting test images occupy less storage space, which reduces the delay in sending the multiple frames of test images to the second device. Further, the first device can also adjust the resolution of the obtained original test images so that the processed test images match the AI recognition algorithm of the second device; or the first device can perform format conversion on the original test images, for example converting them into a format that the AI recognition algorithm of the second device can process, improving the efficiency of synchronous comparison testing.
In a possible design, the first device controlling the test code stream and the test result code stream to be displayed together on the display specifically includes: the first device controls the first player to play the test code stream; and after the first device receives the test result code stream sent by the second device, it controls a second player to play the test result code stream.
In the above possible implementation, while the first device controls the first player to play the test code stream, it plays the test result code stream received in real time with the second player, so the first player and the second player can play the test code stream and the test result code stream simultaneously in the same display interface, making it convenient for the tester to compare the two streams and improving the efficiency of synchronous comparison testing.
In a possible design, the first device controlling the test code stream and the test result code stream to be displayed together on the display specifically includes: the first device controls a test image and the test result image corresponding to the test image to be displayed synchronously.
In the above possible implementation, by controlling a test image and its corresponding test result image to be displayed synchronously, the first device makes it convenient for the tester to compare test images with their corresponding test result images frame by frame, or to locate a specified test image and compare it with its corresponding test result image, improving the flexibility of synchronous testing and further improving the efficiency of smart camera testing.
In a possible design, the first device controlling the test image and the corresponding test result image to be displayed synchronously specifically includes: the first device adjusts the playback rate at which the first player plays the test code stream, so as to control the second player to play the test result image corresponding to the test image played by the first player.
In the above possible implementation, by adjusting the playback rate of the first player, that is, the interval between adjacent frames at which the first player plays the multiple test images, the first device controls a test image and its corresponding test result image to be displayed synchronously, making it convenient for the tester to compare them synchronously and improving the efficiency of synchronous comparison testing.
In a possible design, when the playback rate at which the first player plays the test code stream makes the interval between adjacent frames smaller than a test delay, the playback rate of the first player is reduced so that the interval between adjacent frames is greater than the test delay, the test delay being the time difference between the first device sending a test image to the second device and receiving the test result image returned by the second device.
In the above possible implementation, when the first device controls the adjacent-frame playback interval of the test code stream played by the first player to be greater than the test delay, playback of the test code stream and the test result code stream on the first device is synchronous; that is, a test image and its corresponding test result image are displayed synchronously, so the tester can compare them at the same time, significantly improving the intuitiveness of comparison testing of the smart camera and the flexibility of test operations, and improving test efficiency.
In a possible design, the test result code stream includes mixed images, a mixed image being a stitched image of a test image and the test result image corresponding to the test image.
In the above possible implementation, a test result image received by the first device may also be a stitched image of a test image and its corresponding test result image, so that based on the stitched image the first device can display the test image and its corresponding test result image at the same time, achieving the effect of synchronous testing and improving test efficiency.
In a possible design, the first device controlling the test code stream and the test result code stream to be displayed together on the display specifically includes: the first device displays the mixed images in the test result code stream on the display; or, the first device performs image segmentation on the mixed images in the test result code stream frame by frame to obtain each frame of test image and the test result image corresponding to each frame of test image, and the first device controls the second player to play the test result images while controlling the first player to play the test images.
In the above possible implementation, the first device can play the received mixed images with one player, that is, display a test image and its corresponding test result image at the same time, achieving the effect of synchronous testing; alternatively, the first device can play the test images and their corresponding test result images with two players respectively. Either way achieves the effect of synchronous testing and improves test efficiency.
In a possible design, the method further includes: the first device sends hypertext transfer protocol (HTTP) request information to the second device, where the HTTP request information includes a uniform resource locator (URL) corresponding to a control operation, the control operation being used to instruct the second device to control the test image to be processed.
In the above possible implementation, the first device can send the URL corresponding to a specific control operation to the second device, to instruct the second device to process the current test image to be processed according to that control operation, improving the flexibility of test operations and test efficiency.
In a possible design, the control operation may include at least one of pausing, starting, jumping to a specified test image, or adjusting the frame rate.
In the above possible implementation, the first device sends information about the specific control operation on playing the test images to the second device, for example starting playback, pausing playback, adjusting the playback frame rate, fast-forwarding, rewinding, or jumping to a specified frame, so that the second device processes the test image to be processed according to the control operation. The display of the test images and the test result images can thus be controlled according to test needs, improving the flexibility of test operations and test efficiency.
In a possible design, the HTTP request information further includes input parameters, the input parameters including a frame sequence number or a timestamp of the test image to be processed, the input parameters being used to indicate the test image to be processed that corresponds to the control operation.
In the above possible implementation, the request information may also include the frame sequence number or timestamp of the test image corresponding to the control operation, used by the second device to locate the specific test image corresponding to the control operation according to the frame sequence number or timestamp, so that the second device can return the test result image corresponding to the test image specified by the first device, improving the flexibility of test operations and test efficiency.
In a possible design, the first device controlling the test code stream and the test result code stream to be displayed together on the display specifically includes: the first device receives server-sent events (SSE) information sent by the second device, the SSE information including position information or time information of the test image currently processed by the second device; and the first device controls the currently displayed test image and test result image according to the position information or the time information.
In the above possible implementation, the position information refers to the position of the test image on the playback progress bar of the test code stream, and the time information may refer to the timestamp corresponding to the test image. The first device can control the currently displayed test image and test result image through the test image position information or time information reported by the second device, improving the flexibility of test operations and test efficiency.
In a possible design, the test code stream comes from a storage system of the first device, or a network file system (NFS) of the first device, or a storage system of a third device.
In the above possible implementation, the test code stream obtained by the first device can come from the first device, an NFS system, or another device; the source of the test code stream can be configured flexibly.
According to a second aspect, a smart camera testing method is provided. The method includes: a second device receives, frame by frame, test images sent by a first device, where a test image is the currently processed test image that the first device obtains from a first player or a decoder; the second device performs artificial intelligence (AI) recognition on the test images frame by frame to obtain a test result image corresponding to each frame of test image, the test result image including the test image and recognition result information obtained by performing AI recognition on the test image; and the second device sends a test result code stream to the first device, the test result code stream including multiple frames of test result images.
In the above technical solution, the second device obtains in real time the test image currently processed by the player or decoder of the first device; after performing AI recognition on the test images frame by frame to obtain the corresponding test result images, it sends them to the first device in real time, so that the first device can control the test code stream and the test result code stream to be displayed together on the display for testing, improving the intuitiveness of comparing test images with test result images and improving test efficiency.
In a possible design, before the second device performs AI recognition on the test images in the test code stream frame by frame, the method further includes: the second device obtains multiple frames of original test images in the test code stream; and the second device performs at least one of decompression, format conversion, or resolution adjustment on the original test images frame by frame to obtain the multiple frames of test images of the test code stream.
In the above possible implementation, after decompressing, format-converting, or resolution-adjusting the test images frame by frame, the second device can obtain a format that its AI recognition algorithm can process; meanwhile, since the test images received by the second device have been compressed by the first device, the delay in receiving the multiple frames of test images is reduced, improving the efficiency of synchronous comparison testing.
According to a third aspect, a smart camera testing method is provided. The method includes: a second device obtains a test code stream, the test code stream including multiple frames of test images; the second device performs artificial intelligence (AI) recognition on the test images in the test code stream frame by frame to obtain a test result image corresponding to each frame of test image, the test result image being obtained by superimposing recognition result information on the test image; the second device stitches each test image with its corresponding test result image frame by frame to obtain a mixed image corresponding to the test image; and the second device sends a test result code stream to the first device, the test result code stream including multiple frames of mixed images.
In the above technical solution, the test result image sent by the second device may also be a stitched image of a test image and its corresponding test result image, so that based on the stitched image the first device can display the test image and its corresponding test result image at the same time, achieving the effect of synchronous testing and improving test efficiency.
In a possible design, the second device stitching each test image with its corresponding test result image frame by frame may specifically include: the second device stitches the test image and its corresponding test result image top-to-bottom or left-to-right to obtain the mixed image corresponding to the test image.
In the above possible implementation, the second device stitches a test image and its corresponding test result image into one image by top-bottom or left-right stitching; after sending it to the first device, the first device can display them synchronously, improving the tester's test efficiency.
In a possible design, before the second device performs AI recognition on the test images in the test code stream frame by frame, the method further includes: the second device decodes and/or de-encapsulates the test code stream to obtain the multiple frames of test images.
In a possible design, the method further includes: the second device receives hypertext transfer protocol (HTTP) request information sent by the first device, where the HTTP request information includes a URL corresponding to a control operation; and the second device controls the test image to be processed based on the control operation.
In the above possible implementation, the second device receives the HTTP request information sent by the first device and can obtain the corresponding control operation from the URL in the request information, so that it can process the current test image to be processed according to that control operation, improving the first device's flexibility in controlling test operations and improving test efficiency.
In a possible design, the control operation may include at least one of pausing, starting, jumping to a specified test image, or adjusting the frame rate.
In the above possible implementation, the second device determines the specific control operation by receiving the HTTP request information sent by the first device, for example starting, pausing, adjusting the playback frame rate, or jumping to a specified frame, so that the second device processes the test image to be processed according to the control operation. The display of the test images and the test result images can thus be controlled according to test needs, improving the flexibility of test operations and test efficiency.
In a possible design, the HTTP request information further includes input parameters, the input parameters including a frame sequence number or a timestamp of the test image to be processed, the input parameters being used to indicate the test image to be processed that corresponds to the control operation.
In the above possible implementation, the request information may also include the frame sequence number or timestamp of the test image corresponding to the control operation, used by the second device to locate the specific test image corresponding to the control operation according to the frame sequence number or timestamp, so that the second device can return the test result image corresponding to the test image specified by the first device, improving the flexibility of test operations and test efficiency.
In a possible design, the method further includes: the second device sends server-sent events (SSE) information to the first device, the SSE information including position information or time information of the test image currently processed by the second device, the position information or time information being used to instruct the first device to control the display of the test images and the test result images.
In the above possible implementation, the position information refers to the position of the test image on the playback progress bar of the test code stream, and the time information may refer to the timestamp corresponding to the test image. By reporting the test image position information or time information to the first device, the second device controls the test image and test result image currently displayed on the first device, improving the flexibility of test operations and test efficiency.
In a possible design, the test code stream comes from a storage system of the first device, or a network file system (NFS) of the second device, or a storage system of a third device.
In the above possible implementation, the test code stream obtained by the second device can come from the first device, an NFS system, or another device; the source of the test code stream can be configured flexibly.
According to a fourth aspect, a smart camera testing apparatus is provided. The testing apparatus includes: a sending module configured to send a test code stream to a second device, the test code stream including multiple frames of test images; a receiving module configured to receive a test result code stream sent by the second device, the test result code stream including multiple frames of test result images, where a test result image is obtained by superimposing recognition result information on the test image corresponding to the test result image, and the recognition result information is obtained by the second device performing artificial intelligence (AI) recognition on the test image; and a control module configured to control the test code stream and the test result code stream to be displayed together on a display.
In a possible design, the testing apparatus further includes: an acquisition module configured to obtain the multiple frames of test images of the test code stream frame by frame from a first player or a decoder, the first player being the player that plays the test code stream; the sending module is specifically configured to send the multiple frames of test images to the second device frame by frame.
In a possible design, in the case where the first player is playing the test code stream, the acquisition module is specifically configured to obtain a target test image from the first player, the target test image being the test image currently played by the first player; the sending module is specifically configured to send the target test image to the second device.
In a possible design, the acquisition module is specifically configured to: obtain multiple frames of original test images of the test code stream from the first player or the decoder; and perform at least one of image compression, format conversion, or resolution adjustment on the original test images frame by frame to obtain the multiple frames of test images of the test code stream.
In a possible design, the control module is specifically configured to: control the first player to play the test code stream; and after the test result code stream sent by the second device is received, control a second player to play the test result code stream.
In a possible design, the control module is specifically configured to control a test image and the test result image corresponding to the test image to be displayed synchronously.
In a possible design, the control module is specifically configured to: by adjusting the playback rate at which the first player plays the test code stream, control the second player to play the test result image corresponding to the test image played by the first player.
In a possible design, the test result code stream includes mixed images, a mixed image being a stitched image of a test image and the test result image corresponding to the test image.
In a possible design, the control module is specifically further configured to: display the mixed images in the test result code stream on the display; or perform image segmentation on the mixed images in the test result code stream frame by frame to obtain each frame of test image and the test result image corresponding to each frame of test image, and control the second player to play the test result images while controlling the first player to play the test images.
In a possible design, the sending module is further configured to send hypertext transfer protocol (HTTP) request information to the second device, where the HTTP request information includes a URL corresponding to a control operation, the control operation being used to instruct the second device to control the test image to be processed.
In a possible design, the HTTP request information further includes input parameters, the input parameters including a frame sequence number or a timestamp of the test image to be processed, the input parameters being used to indicate the test image to be processed that corresponds to the control operation.
In a possible design, the receiving module is further configured to receive server-sent events (SSE) information sent by the second device, the SSE information including position information or time information of the test image currently processed by the second device; the control module is specifically further configured to control the currently displayed test image and test result image according to the position information or the time information.
In a possible design, the test code stream comes from a storage system of the testing apparatus, or a network file system (NFS) of the testing apparatus, or a storage system of a third device.
According to a fifth aspect, a smart camera testing apparatus is provided. The testing apparatus includes: a receiving module configured to receive, frame by frame, test images sent by a first device, where a test image is the currently processed test image that the first device obtains from a first player or a decoder; an AI identification module configured to perform artificial intelligence (AI) recognition on the test images frame by frame to obtain a test result image corresponding to each frame of test image, the test result image including the test image and recognition result information obtained by performing AI recognition on the test image; and a sending module configured to send a test result code stream to the first device, the test result code stream including multiple frames of test result images.
In a possible design, the testing apparatus further includes: a processing module configured to perform at least one of decompression, format conversion, or resolution adjustment frame by frame on the multiple frames of original test images obtained by the receiving module, to obtain the multiple frames of test images of the test code stream.
According to a sixth aspect, a smart camera testing apparatus is provided. The testing apparatus includes: an acquisition module configured to obtain a test code stream, the test code stream including multiple frames of test images; an AI identification module configured to perform artificial intelligence (AI) recognition on the test images in the test code stream frame by frame to obtain a test result image corresponding to each frame of test image, the test result image including the test image and recognition result information obtained by performing AI recognition on the test image; a processing module configured to stitch each test image with its corresponding test result image frame by frame to obtain a mixed image corresponding to the test image; and a sending module configured to send a test result code stream to the first device, the test result code stream including multiple frames of mixed images.
In a possible design, the testing apparatus further includes a decoding module configured to decode and/or de-encapsulate the test code stream to obtain the multiple frames of test images.
In a possible design, the testing apparatus further includes: a receiving module configured to receive hypertext transfer protocol (HTTP) request information sent by the first device, where the HTTP request information includes a URL corresponding to a control operation; the AI identification module is specifically further configured to control the test image to be processed based on the control operation.
In a possible design, the HTTP request information further includes input parameters, the input parameters including a frame sequence number or a timestamp of the test image to be processed, the input parameters being used to indicate the test image to be processed that corresponds to the control operation.
In a possible design, the sending module is further configured to send server-sent events (SSE) information to the first device, the SSE information including position information or time information of the test image currently processed by the second device, the position information or time information being used to instruct the first device to control the display of the test images and the test result images.
In a possible design, the test code stream comes from a storage system of the first device, or a network file system (NFS) of the second device, or a storage system of a third device.
According to a seventh aspect, a smart camera testing apparatus is provided. The apparatus includes a processor and a transmission interface, and the processor is configured to execute instructions stored in a memory, so as to perform:
sending a test code stream to a second device through the transmission interface, the test code stream including multiple frames of test images; receiving, through the transmission interface, a test result code stream sent by the second device, the test result code stream including multiple frames of test result images, where a test result image is obtained by superimposing recognition result information on the test image corresponding to the test result image, and the recognition result information is obtained by the second device performing artificial intelligence (AI) recognition on the test image; and controlling the test code stream and the test result code stream to be displayed together on a display.
In a possible design, the processor is specifically configured to perform: obtaining the multiple frames of test images of the test code stream frame by frame from a first player or a decoder, the first player being the player that plays the test code stream; and sending the multiple frames of test images to the second device frame by frame.
In a possible design, the processor is specifically configured to perform: in the case where the first player is playing the test code stream, obtaining a target test image from the first player, the target test image being the test image currently played by the first player; and sending the target test image to the second device.
In a possible design, the processor is specifically configured to perform: obtaining multiple frames of original test images of the test code stream, and performing at least one of image compression, format conversion, or resolution adjustment on the original test images frame by frame to obtain the multiple frames of test images of the test code stream.
In a possible design, the processor is specifically configured to perform: controlling the first player to play the test code stream; and after the test result code stream sent by the second device is received, controlling a second player to play the test result code stream.
In a possible design, the processor is specifically configured to perform: controlling a test image and the test result image corresponding to the test image to be displayed synchronously.
In a possible design, the processor is specifically configured to perform: adjusting the playback rate at which the first player plays the test code stream, so as to control the second player to play the test result image corresponding to the test image played by the first player.
In a possible design, the test result code stream includes mixed images, a mixed image being a stitched image of a test image and the test result image corresponding to the test image.
In a possible design, the processor is specifically configured to perform: displaying the mixed images in the test result code stream on a display; or performing image segmentation on the mixed images in the test result code stream frame by frame to obtain each frame of test image and the test result image corresponding to each frame of test image, and controlling a second player to play the test result images while controlling the first player to play the test images.
In a possible design, the processor is further configured to perform: sending hypertext transfer protocol (HTTP) request information to the second device through the transmission interface, where the HTTP request information includes a URL corresponding to a control operation, the control operation being used to instruct the second device to control the test image to be processed.
In a possible design, the HTTP request information further includes input parameters, the input parameters including a frame sequence number or a timestamp of the test image to be processed, the input parameters being used to indicate the test image to be processed that corresponds to the control operation.
In a possible design, the processor is further configured to perform: receiving, through the transmission interface, server-sent events (SSE) information sent by the second device, the SSE information including position information or time information of the test image currently processed by the second device; and controlling the currently displayed test image and test result image according to the position information or the time information.
In a possible design, the test code stream comes from a storage system of the testing apparatus, or a network file system (NFS) of the testing apparatus, or a storage system of a third device.
According to an eighth aspect, a smart camera testing apparatus is provided. The apparatus includes a processor and a transmission interface, and the processor is configured to execute instructions stored in a memory so as to perform the testing method in the second aspect or any possible design of the second aspect.
According to a ninth aspect, a smart camera testing apparatus is provided. The apparatus includes a processor and a transmission interface, and the processor is configured to execute instructions stored in a memory so as to perform the testing method in the third aspect or any possible design of the third aspect.
According to a tenth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions which, when executed by a computer or a processor, enable the computer or the processor to perform the smart camera testing method according to any one of the designs of the first aspect.
According to an eleventh aspect, a computer program product is provided. When the computer program product runs on a computer or a processor, the computer or the processor performs the smart camera testing method according to any one of the designs of the first aspect.
According to a twelfth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions which, when executed by a computer or a processor, enable the computer or the processor to perform the smart camera testing method according to any one of the designs of the second aspect.
According to a thirteenth aspect, a computer program product is provided. When the computer program product runs on a computer or a processor, the computer or the processor performs the smart camera testing method according to any one of the designs of the second aspect.
According to a fourteenth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions which, when executed by a computer or a processor, enable the computer or the processor to perform the smart camera testing method according to any one of the designs of the third aspect.
According to a fifteenth aspect, a computer program product is provided. When the computer program product runs on a computer or a processor, the computer or the processor performs the smart camera testing method according to any one of the designs of the third aspect.
It can be understood that any of the smart camera testing apparatuses, computer-readable storage media, and computer program products provided above can be used to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an application scenario of a smart camera testing method provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a smart camera testing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of an image processing effect of AI recognition provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a processing procedure of a smart camera testing method provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of another smart camera testing method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a processing procedure of another smart camera testing method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a smart camera testing apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of another smart camera testing apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of another smart camera testing apparatus provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a chip provided by an embodiment of the present application.
Detailed Description of Embodiments
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments, unless otherwise stated, "multiple" means two or more.
It should be noted that in the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the present application should not be interpreted as more preferred or advantageous than other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete manner.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
First, the implementation environment and application scenarios of the embodiments of the present application are briefly introduced.
The embodiments of the present application provide a smart camera testing method and testing apparatus, which can be applied to testing smart photographing apparatuses, for example smart cameras and other apparatuses or electronic devices equipped with a smart camera component. Exemplarily, an application scenario of this embodiment may be as shown in FIG. 1.
The first device refers to a test console, that is, the apparatus or electronic device that performs test operations, views test results, and records test results. Specifically, the first device may be a computer, a personal computer (PC), a laptop, or an ultra-mobile personal computer (UMPC), and may also be at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center. Exemplarily, a test PC is used as an example in the following embodiments of the present application.
The first device can establish a connection with the second device through a universal serial bus (USB) network channel or a USB virtual serial port channel, so that the first device and the second device can transmit control information and/or code stream data through the connection. The USB channel is the standard communication port through which the first device connects external devices; the USB virtual serial port is a serial port virtualized on the first device through the USB communications device class, used to provide communication transmission for the first device 101.
The second device refers to a smart camera (AIC) with a built-in AI algorithm, or another apparatus or electronic device equipped with an AIC. An AIC is a camera with AI computing capability that can run computation, through its built-in AI algorithm, on objects it photographs or on input images, so as to recognize target photographed objects or detect objects in an image. The second device can communicate and interconnect with the first device through a configured USB interface to transmit audio and video data. Alternatively, the second device can exchange data with the first device through Bluetooth, a wireless network, or a wired network, for example transmitting a video stream based on the Internet Protocol (IP). Exemplarily, the video stream is transmitted through the hypertext transfer protocol (HTTP) or the WebSocket protocol of the IP protocol suite. HTTP is a one-way communication protocol: the client initiates an HTTP request and the server returns data. WebSocket is a two-way communication protocol: after the client and the server establish a connection, either side can actively send data to or receive data from the other.
Exemplarily, the AIC involved in the embodiments of the present application can support the USB Video Class (UVC) protocol and support video encoding and decoding capabilities. UVC is a protocol standard defined for USB video-capture electronic devices; camera devices that provide a USB interface can generally support the application and implementation of this standard.
The application scenario of the present application is based on testing the smart recognition function of the second device: by testing the computation results of the AI recognition algorithm built into the second device, it is determined whether the AI algorithm can accurately recognize the photographed object, or whether the AI algorithm solves the expected problem. The test input data includes video data or an image collection. The second device can superimpose the AI recognition result onto the input video data or image collection to obtain test result images; that is, a test result image may be the output video data or image collection with the AI recognition result superimposed, and may include the test image and the AI information obtained by performing AI recognition on the test image.
The test principle in the embodiments of the present application is as follows: a pre-obtained video code stream or image collection serves as the test code stream; the AIC performs AI recognition computation on the test code stream frame by frame, and the recognition result obtained in real time for each frame of test image of the test code stream is the test result image. The test PC synchronously compares the test result images generated in real time with the original test images, thereby completing the synchronous test and reaching a test conclusion.
An embodiment of the present application provides a smart camera testing method. As shown in FIG. 2, the method may specifically include the following steps:
S201: The first device sends a test code stream to a second device, the test code stream including multiple frames of test images.
The first device can obtain the test code stream from a test code stream library. The test code stream library is a storage system storing test code streams; it may be a file system deployed on the first device, or a file system stored on the first device or another server and shared with the second device through a network file system (NFS), or a storage system of a third device. Therefore, the test code stream may come from the storage system of the first device, from NFS, or from the third device, which is not specifically limited in the embodiments of the present application.
NFS is an application system based on the User Datagram Protocol (UDP) or the Internet Protocol (IP), mainly implemented through a remote procedure call (RPC) mechanism. RPC provides a set of operations for accessing remote files that are independent of the machine, the operating system, and the underlying transport protocol. NFS is a network abstraction on top of a file system; using NFS, an electronic device can access the file system of a remote client over the network, with access operations similar to the way the electronic device accesses its local file system.
The test code stream may be an image collection or a video code stream that has been encoded and compressed. A video is a continuous image sequence composed of consecutive frames, one frame being one image. Because of the persistence of vision of the human eye, when a frame sequence is played at a certain rate, we see video with continuous motion. Since the similarity between images of consecutive frames is high, the original video can be encoded and compressed to remove redundancy for ease of storage and transmission. A video encoding method is a way of converting a file in an original video format into a file in another video format through compression technology. A video code stream is the data transmitted after the original video file is encoded, for example an H.264 video frame stream generated according to the ITU codec standard H.264, or an H.266 video frame stream generated according to the codec standard H.266.
In one implementation, the first device can obtain each frame of test image of the test code stream from a video player or a decoder and send the test images to the second device frame by frame.
Exemplarily, the first device can obtain each frame of test image directly from the video player playing the test code stream, for example the first player, or from the decoder decoding the test code stream. The first device can specifically choose whether to obtain the multiple frames of test images from the video player or from the decoder according to the interfaces provided by the video player or the decoder.
After obtaining each frame of test image frame by frame, the first device sends them to the second device frame by frame. Exemplarily, the original format of each frame of test image in the test code stream obtained by the first device may be an RGB or YUV image, or an encoded and compressed video frame stream, for example an H.264 video frame stream.
In another implementation, the first device obtaining each frame of test image of the test code stream from the video player or the decoder and sending each frame of test image to the second device frame by frame may further include:
The first device obtains the multiple frames of original images of the test code stream frame by frame, encodes and compresses each frame of original image one by one, and then sends each compressed frame of test image to the second device.
Optionally, the first device can also adjust the resolution of the multiple frames of original images before sending them to the second device.
Optionally, the first device can also perform format conversion on the multiple frames of original images before sending them to the second device.
The test image obtained after encoding and compressing the original image, adjusting the image resolution, or converting the format is an image that the AI algorithm in the second device can receive and process.
The image compression format is subject to the image formats supported by the second device. Exemplarily, the test image can be compressed into the JPEG image format developed by the Joint Photographic Experts Group (JPEG), or the Portable Network Graphics (PNG) format, etc. When the second device supports the JPEG format, the first device can choose to compress each captured frame of original image into JPEG before sending it to the second device; when the second device supports the PNG format, the first device can choose to compress each captured frame of original image into PNG before sending it to the second device.
It should be noted that whether to compress the test images can be decided according to the USB bandwidth resources for transmitting test images between the first device and the second device. When the USB bandwidth between the first device and the second device is sufficient to transmit the original images, the first device does not compress the multiple frames of test images and directly sends the original test images to the second device; when the USB bandwidth is insufficient, the first device can compress the multiple frames of test images and then send the compressed test images to the second device, which saves bandwidth resources and also reduces the delay caused by sending the test images.
In another implementation, the first device obtaining the multiple frames of test images of the test code stream and sending each frame of test image to the second device frame by frame may further include:
The first device obtains the multiple frames of original images of the test code stream frame by frame and adjusts the resolution of each frame of original image one by one to generate each frame of test image of the test code stream.
When the image resolution that the second device can process is inconsistent with that of the multiple frames of test images obtained by the first device, the first device can adjust the resolution of each frame of test image before sending them to the second device frame by frame. For example, if the resolution of the multiple frames of test images obtained by the first device is 1280×800 while the image resolution the second device can process is 1024×600, the first device can adjust the resolution of the test images to 1024×600 frame by frame before sending them to the second device.
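The resolution adjustment described above can be sketched as a nearest-neighbour downscale. This is only an illustrative sketch that treats a grayscale frame as nested Python lists; a real implementation would use an image library, and the function name `resize_nearest` is ours, not the patent's.

```python
def resize_nearest(frame, new_w, new_h):
    """Nearest-neighbour resize of a frame stored as rows of pixel values."""
    old_h, old_w = len(frame), len(frame[0])
    return [
        [frame[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# A 1280x800 frame downscaled to the 1024x600 the second device accepts.
src = [[(x + y) % 256 for x in range(1280)] for y in range(800)]
dst = resize_nearest(src, 1024, 600)
```

The same helper can be applied frame by frame before each send, so only frames at the AIC-compatible resolution go over the USB channel.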
S202: The second device receives the multiple frames of test images sent by the first device.
S203: The second device performs artificial intelligence (AI) recognition on the test images frame by frame to obtain a test result image corresponding to each frame of test image.
The second device receives the test images sent by the first device frame by frame and processes them according to a pre-configured AI recognition algorithm to obtain the test result image corresponding to each frame of test image. The test result images correspond one-to-one to the test images; the correspondence can be determined through information such as identical frame sequence numbers, video playback progress bar information, or frame timestamps.
A test result image may be an image obtained by superimposing AI information on the test image, the AI information being the recognition result information obtained by computing the test image with the AI algorithm. Therefore, the test result image may include the test image and the AI information obtained by performing AI recognition on the test image, where the AI information can be represented by graphics, labels, or the like.
Exemplarily, for an object-detection AI recognition algorithm, after AI recognition computation a rectangular box can be added to the image to frame the detected object, forming the test result image. For an object-recognition AI recognition algorithm, after AI recognition computation a test result image with object labels added can be obtained. For example, as shown in FIG. 3, the test result image output by the second device after recognizing image 1 is image 1 with the label "house" added, and the test result image output after recognizing image 2 is image 2 with the labels "person 1" and "person 2" added.
The AI recognition algorithm in the embodiments of the present application can specifically be implemented by a neural network model or a support vector machine algorithm. The present application does not limit the specific AI recognition algorithm, nor the representation of the test result image; those skilled in the art can choose and configure them according to actual needs.
S204: The second device sends a test result code stream to the first device, the test result code stream including multiple frames of test result images.
The second device sends each frame of test result image obtained above to the first device frame by frame.
In one implementation, in order to reduce the transmission delay of the test result images and the bandwidth occupied by data transmission, the second device can encode and compress the test result images frame by frame to generate the test result code stream, and then send the test result code stream to the first device frame by frame.
S205: The first device receives the test result code stream sent by the second device.
In one implementation, the first device receives the result code stream sent by the second device and obtains each frame of test result image after decoding.
S206: The first device controls the test code stream and the test result code stream to be displayed together on a display.
When the first device receives the first frame of test result image, the first device can automatically open another video player, such as a second player, to play the test result images frame by frame.
The first device controlling the test code stream and the test result code stream to be displayed together on the display specifically includes: the first device controls the first player to play the test code stream, and at the same time, after the first device receives the test result code stream sent by the second device, the first device controls the second player to play the test result code stream.
Exemplarily, with the first device being a test PC and the second device being a smart camera AIC, when the test PC receives the first frame of test result image of the test result code stream sent by the AIC, the test PC plays that first frame of test result image through the second player and then plays the received test result code stream frame by frame.
As shown in FIG. 4, taking the first device being a test PC and the second device being an AIC as an example, when the test PC starts testing, the first player on the test PC obtains the test video to be played from the test code stream library, obtains each frame of test image of the test video frame by frame, encodes and compresses it, and sends it to the AIC frame by frame. After decoding to obtain the test images, the AIC performs AI recognition on them frame by frame, obtains each frame of test result image, encodes it, and sends it to the test PC. When the test PC receives the test result images sent by the AIC, the second player on the test PC plays the test result video frame by frame, the test result video including multiple frames of test result images.
The first player and the second player in the embodiments of the present application may be web video players or client video players installed on the first device, which is not specifically limited in the present application.
Since in the above implementation sending test images, performing AI computation, and receiving test result images are all done frame by frame, and playback of both the test video and the test result video proceeds frame by frame in the sequence order of the image frames, the first device plays the test code stream and the test result code stream frame by frame through two players, and the first player and the second player can play the two streams simultaneously in the same display interface, making it convenient for the tester to compare them and improving the efficiency of synchronous comparison testing.
In one implementation, in the case where the first player is playing the test code stream, the first device can obtain from the first player the currently played test image as the target test image and send the target test image to the second device.
The target test image is the test image currently played by the first player that the first device obtains; that is, the first device can send the test image currently played by the first player to the second device in real time, so that the second device performs AI recognition processing in real time according to the test image sent by the first device and sends the resulting corresponding test result image to the first device in real time. The first device displays each frame of test result image frame by frame, and while displaying a test result image, displays the test image corresponding to that test result image.
It should be noted that when the test delay, from the first device sending each frame of test image, through the second device performing AI recognition processing on each frame, to the first device receiving each frame of test result image, is negligible, or when this test delay is smaller than the adjacent-frame playback interval at which the first player plays the test images frame by frame, playback of the test code stream and the test result code stream on the first device can be considered synchronous. That is, a test image and its corresponding test result image are displayed synchronously, so the tester can compare them at the same time, significantly improving the intuitiveness of comparison testing of the smart camera and the flexibility of test operations, and improving test efficiency.
Therefore, the first device can control playback of the test result images on the second player by controlling playback of the test images on the first player. Through control operations on the player running on the first device, such as pause, play, fast-forward, or adjusting the frame rate, a technician can control the image processing of the second device, thereby achieving real-time synchronization of the test video and the test result video.
Exemplarily, when the test PC receives a video-pause operation clicked by the technician, the test PC stops sending test images to the AIC and the AIC pauses image processing; when the play operation is clicked, the AIC continuously receives test images, processes them, and sends the test results to the test PC, achieving playback synchronization. If the technician sets a playback speed on the test PC, for example a frame rate of 15 Hz, the AIC receives test images at 15 frames per second, so the test-video playback speed set on the test PC is also the playback speed of the test result video.
In a possible implementation, the first device can control a test image and its corresponding test result image to be displayed synchronously in the following specific way: the first device adjusts the playback rate at which the first player plays the test code stream, so as to control the second player to play the test result image corresponding to the test image played by the first player.
Specifically, the first device can control the playback rate at which the first player plays the test code stream so that, when the interval between adjacent frames becomes smaller than the above test delay, the first device can reduce the playback rate of the first player. The playback rate of the first player can be reduced manually, or automatically by the first device according to preset conditions.
Exemplarily, if the initial playback rate of the first player on the test PC is 30 frames per second, i.e., the interval between adjacent frames of the test code stream is about 0.033 seconds, and it is determined by measurement that the delay from the first device sending one frame of test image to receiving the corresponding test result image is 1 second, then to compare the test code stream and the test result code stream synchronously on the test PC, the first player can be adjusted to reduce its playback rate. For example, if the first player is adjusted to play the test code stream slowly frame by frame with an interval of 15 seconds between adjacent frames, the adjacent-frame interval of the test code stream is far greater than the test delay, and at this time the test images played by the first player and the test result images played by the second player are synchronous.
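The arithmetic behind this rate adjustment can be sketched as follows. The helper names and the 0.9 safety margin are our assumptions for illustration; the rule itself, slow the first player down until the adjacent-frame interval exceeds the measured test delay, is the one described above.

```python
def max_sync_fps(test_delay_s):
    """Highest frame rate at which the adjacent-frame interval still exceeds the test delay."""
    return 1.0 / test_delay_s

def adjust_fps(current_fps, test_delay_s, margin=0.9):
    """Lower the player's rate when the frame interval is shorter than the test delay."""
    if 1.0 / current_fps < test_delay_s:
        return margin * max_sync_fps(test_delay_s)
    return current_fps

# 30 fps with a 1 s round-trip test delay: interval (~0.033 s) < delay, so slow down.
fps = adjust_fps(30.0, 1.0)
```

With a 1-second test delay this yields 0.9 fps, i.e., an adjacent-frame interval of just over a second, so each test frame's result arrives before the next frame is played.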
In the above possible implementation, when the first device controls the adjacent-frame playback interval of the test code stream played by the first player to be greater than the test delay, playback of the test code stream and the test result code stream on the first device is synchronous; that is, a test image and its corresponding test result image are displayed synchronously, so the tester can compare them at the same time, significantly improving the intuitiveness of comparison testing of the smart camera and the flexibility of test operations, and improving test efficiency.
An embodiment of the present application further provides another smart camera testing method. That is, after the second device performs AI recognition on the test images to obtain the corresponding test result images in the above step S203, as shown in FIG. 5, the testing method may further include:
S501: The second device stitches each test image with its corresponding test result image frame by frame to obtain a mixed image corresponding to the test image.
In one implementation, the stitching may be top-bottom stitching: the second device can stitch each frame of test image with its corresponding test result image top-to-bottom one by one to obtain the mixed image of each frame of test image. For example, the test image is displayed on top and its corresponding test result image below, yielding one mixed image; or conversely, the test result image is displayed on top and the test image below.
In another implementation, the second device can stitch each frame of test image with its corresponding test result image left-to-right one by one to obtain the mixed image of each frame of test image. For example, the test image is displayed on the left and its corresponding test result image on the right, yielding one mixed image; or conversely, the test result image is displayed on the left and the test image on the right.
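Treating a frame as a list of pixel rows, the two stitching modes reduce to list concatenation. A minimal sketch with toy 2×2 frames (real frames would be full H×W pixel arrays; the function name `stitch` is ours):

```python
def stitch(test_frame, result_frame, mode="top_bottom"):
    """Concatenate a test frame and its result frame into one mixed frame."""
    if mode == "top_bottom":
        return test_frame + result_frame                         # stack rows vertically
    if mode == "left_right":
        return [a + b for a, b in zip(test_frame, result_frame)]  # join each row pair
    raise ValueError(mode)

test = [[1, 2], [3, 4]]
result = [[5, 6], [7, 8]]
tb = stitch(test, result)                  # 4 rows x 2 cols
lr = stitch(test, result, "left_right")    # 2 rows x 4 cols
```

Both modes assume the two frames share the same dimensions, which holds here since the result image is the test image with recognition information superimposed.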
S502: The second device sends a test result code stream to the first device, the test result code stream including multiple frames of mixed images.
The second device can send the mixed images obtained above to the first device frame by frame, or encode and compress multiple frames of mixed images and send them together to the first device.
Based on the above implementation, in the aforementioned step S206 the first device controlling the test code stream and the test result code stream to be displayed together on the display may specifically further include the following two display modes:
Mode one:
The first device displays the mixed images in the test result code stream on the display.
In this implementation, the first device can play the received multiple frames of mixed images frame by frame; that is, the first device displays the multiple frames of mixed images in the test result code stream frame by frame. A mixed image includes a test image and a test result image in one-to-one correspondence, so the tester can perform synchronous comparison testing frame by frame and complete real-time synchronous comparison of the test video and the test result video. Since each frame of test image and its corresponding test result image are stitched and displayed within one image, it is convenient for technicians to compare them at the same time, which can improve test efficiency.
Exemplarily, as shown in FIG. 6, taking the first device being a test PC and the second device being an AIC as an example, the AIC obtains the test code stream from the test code stream library, de-encapsulates or decodes it to obtain the original test images, and performs AI recognition on the test images to obtain the test result images. The AIC stitches the test images and test result images frame by frame to obtain the mixed images, then sends the multiple frames of mixed images to the test PC, so that the test PC can play them frame by frame and complete the real-time comparison test.
Mode two:
Step one: The first device performs image segmentation on the mixed images in the test result code stream frame by frame to obtain each frame of test image and the test result image corresponding to each frame of test image.
In this implementation, the first device can also process the received multiple frames of mixed images to obtain each frame of test image and its corresponding test result image, and then play the test video and the test result video frame by frame respectively to compare the test results. It should be understood that the test video includes multiple frames of test images and the test result video includes multiple frames of test result images.
Exemplarily, the first device can perform image segmentation according to the stitching mode described above, splitting each frame of mixed image back into a test image and a test result image. If the mixed image is stitched left-right (test image on the left, test result image on the right), the first device can crop the left half of the mixed image to produce the test image and the right half to produce the test result image, and then play them separately to achieve synchronous testing.
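The inverse operation, recovering the two halves from a left-right mixed frame, is sketched below. It assumes the halves have equal width, which follows from the stitching step; the function name is ours:

```python
def split_left_right(mixed_frame):
    """Cut a left-right mixed frame back into its test and result halves."""
    w = len(mixed_frame[0]) // 2
    test = [row[:w] for row in mixed_frame]    # left half: original test image
    result = [row[w:] for row in mixed_frame]  # right half: AI-annotated result
    return test, result

mixed = [[1, 2, 5, 6], [3, 4, 7, 8]]
test_img, result_img = split_left_right(mixed)
```

A top-bottom mixed frame would instead be split at row `len(mixed_frame) // 2`.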
Step two: The first device controls the second player to play the test result images while controlling the first player to play the test images.
Based on the test images and corresponding test result images obtained by the above segmentation, the first device controls the two players to play them respectively; and while the first player plays a certain test image, the first device controls the second player to play the test result image corresponding to that test image. The tester can thus perform synchronous comparison testing in real time; the test is highly intuitive, test problems can be found in real time, and test efficiency is improved.
In one implementation, if the test code stream obtained by the second device from the test code stream library is in an encapsulated format that the second device cannot process directly, the test code stream also needs to be de-encapsulated before each frame of test image is obtained.
Exemplarily, if the test code stream obtained from the test code stream library is in the Moving Picture Experts Group 4 (MP4) format, MP4 being an encapsulation standard for audio and video information, the second device needs to de-encapsulate the MP4-format test code stream to obtain its image part, for example a raw H.264 stream. If the test code stream itself is in the raw H.264 format, de-encapsulation is not needed and each frame of test image can be obtained from the raw H.264 stream.
In one implementation, the test code stream obtained by the second device from the test code stream library is in an encoded and compressed format; before each frame of test image is obtained, the test code stream also needs to be decoded to obtain each frame of test image.
Exemplarily, if the test code stream itself is in the raw H.264 format, no decoding is needed and each frame of test image can be obtained from the raw H.264 stream. If the code stream is in JPEG or PNG format, it needs to be decompressed according to the compression rules to obtain each frame of test image.
In a possible implementation, during real-time synchronous testing the technician needs to control the playback speed and progress of the test video, for example pausing, playing, fast-forwarding, rewinding, dragging, or slow frame-by-frame testing through the video player on the first device. Therefore, the first device can establish communication with the second device through a predefined control protocol and transmit control information, thereby controlling the processing procedure of the second device and further controlling the playback synchronization of the test video and the test result video on the first device, for example controlling the second device to start or pause processing video frames, controlling the second device's processing frame rate, or locating the video frame currently processed by the second device. The testing method may therefore further include:
The first device sends hypertext transfer protocol (HTTP) request information to the second device, where the request information may include a uniform resource locator (URL) used by the second device to obtain the specific control operation according to the URL, so as to control the test image to be processed.
After receiving the HTTP request information, the second device can control the test image to be processed based on the control operation.
For example, the first device can perform operations on the first player such as pause, play, fast-forward, rewind, controlling the playback speed, jumping to a specified video frame, or adjusting the frame rate. In response to the technician's playback control operations, the first device can send the corresponding HTTP request information to the second device to control the image processing of the second device, thereby achieving playback control of the mixed code stream. The technician can flexibly control the video test progress by operating the test PC, so the real-time comparison test is highly intuitive and test efficiency is effectively improved.
The first device and the second device have built-in web servers, such as HTTP servers, and their network communication can use the HTTP protocol to transmit control signaling. The HTTP protocol can be used to transmit web page information of World Wide Web (WWW) services, and HTTP transmits in plain text.
For example, the test PC can send HTTP request information to the AIC and, through the control signaling included in it, control the processing of video frames on the AIC. Specifically, different uniform resource locators (URLs) can identify specific control operations, for example play, pause, adjusting the frame rate, or jumping. The request information can include the URL corresponding to the specific control operation, used by the AIC to obtain the specific control operation according to that URL address, thereby achieving playback control of the result code stream. The request information can also include input parameters identifying the test image corresponding to the aforementioned control operation.
Exemplarily, the test PC can send request information in JavaScript Object Notation (JSON) format to the AIC. The request information can include the URL of the control operation and can also carry input parameters; for example, the input parameters can include information about the test image the test PC requests to be played next, such as a frame sequence number or a frame timestamp. After receiving the request information, the AIC determines the control operation according to the URL in the request information, determines the test image the test PC requests to play according to the input parameters, and returns response information in JSON format to the test PC. The response information can include output parameters, that is, a response indicating that the AIC successfully received the control information sent by the test PC, or a response indicating reception failure. The AIC then adjusts the currently processed test image according to the received control operation and sends the test result image to the test PC, thereby adjusting the currently played frame of the test video and the test result video so that the two videos can be compared synchronously in real time.
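A minimal sketch of such a JSON control exchange is below. The URL path `/control/...` and the key names `url`, `params`, and `frame_seq` are our assumptions for illustration; the patent only specifies that the URL identifies the operation and that input parameters may carry a frame sequence number or timestamp.

```python
import json

def control_request(operation, frame_seq=None):
    """Test-PC side: build a JSON control message with an op-specific URL."""
    msg = {"url": f"/control/{operation}"}
    if frame_seq is not None:
        msg["params"] = {"frame_seq": frame_seq}
    return json.dumps(msg)

def handle_request(raw):
    """AIC side: recover the operation and the target frame from the JSON body."""
    msg = json.loads(raw)
    op = msg["url"].rsplit("/", 1)[-1]
    frame_seq = msg.get("params", {}).get("frame_seq")
    return op, frame_seq

raw = control_request("seek", frame_seq=120)
op, seq = handle_request(raw)
```

In a real deployment the message would travel in the body of an HTTP request, and the AIC would answer with a JSON response carrying a success or failure output parameter.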
The JSON format is a standard specification for data interchange; it is a development-language-independent, lightweight format for data storage and transmission.
For example, if the HTTP request information sent by the test PC to the AIC includes a control operation of jumping to a specified test image, the HTTP request information can further include the frame sequence number of the specified test image to be processed that corresponds to the control operation. After receiving the HTTP request information, the AIC can determine the frame sequence number of the specified test image based on the control operation and the input parameters, and adjust the currently processed test image to that specified test image.
In the above possible implementation, the request information can further include the frame sequence number or timestamp of the test image corresponding to the control operation, used by the second device to locate the specific test image corresponding to the control operation according to the frame sequence number, or to determine the specific test image according to the timestamp and the total duration of the test code stream, so that the second device can return the test result image corresponding to the test image specified by the first device, improving the flexibility of test operations and test efficiency.
In addition, in a possible implementation, the network communication between the first device and the second device can also use server-sent events (SSE) as the bearer protocol.
The second device can send SSE information to the first device, the SSE information including position information or time information of the test image currently processed by the second device; the first device then receives the SSE information sent by the second device and controls the currently displayed test image and test result image according to the position information or the time information.
SSE is based on the HTTP protocol and can push information to a web browser through a stream; SSE is a one-way transmission channel from the server to the receiving browser. What SSE sends is not a one-off data packet but a data stream that can be sent continuously. While the sending electronic device sends the data stream, the receiving electronic device does not close the connection and continuously receives new data streams sent by the sending electronic device (such as a server); video playback, for example, is the continuous sending of a video frame sequence.
The position information of a test image refers to the position on the playback progress bar at which the specified test image is located during playback of the test code stream. The time information refers to the timestamp corresponding to the test image.
Further, the second device can be configured to report the currently processed test image information to the first device through SSE at preset intervals; that is, the second device periodically reports the currently processed test image information to the first device.
Exemplarily, the AIC can periodically report to the test PC information such as the position information of the currently processed test image, or the sequence number of the test image in the frame sequence, or its timestamp. The test PC can then determine the current test image according to the timestamp reported by the AIC and the pre-configured total duration of the test video, or according to the frame sequence number and the total number of frames, or according to the position information of the test image and the total progress of the test code stream, and adjust the currently played video frame of the video player on the test PC to achieve real-time synchronous testing.
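The periodic progress report can be carried in the standard server-sent-events wire format, one `event:`/`data:` block per report. The event name and the JSON field names below are our assumptions; only the use of SSE and the reported frame position/time information come from the text above.

```python
import json

def sse_progress_event(frame_seq, timestamp_ms):
    """AIC side: serialize one progress report in SSE wire format."""
    data = json.dumps({"frame_seq": frame_seq, "timestamp_ms": timestamp_ms})
    return f"event: progress\ndata: {data}\n\n"

def parse_sse_event(text):
    """Test-PC side: pull the JSON payload out of one SSE event block."""
    for line in text.splitlines():
        if line.startswith("data: "):
            return json.loads(line[len("data: "):])
    return None

payload = parse_sse_event(sse_progress_event(42, 1400))
```

Given `frame_seq` and the known total frame count (or `timestamp_ms` and the total duration), the test PC can map the report onto its progress bar and seek both players accordingly.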
In the above implementation, the first device establishes a communication connection with the second device and controls its currently displayed test image and test result image by periodically obtaining the position information or time information of the test image processed by the second device. That is, the control protocol ensures that the video frame played by the first device and the video frame on which the second device performs AI computation are synchronous, so the technician can flexibly adjust the playback progress according to test needs; synchronous comparison testing is more intuitive, and the flexibility of test operations and test efficiency are improved.
It can be understood that, in order to implement the above functions, the above first device or second device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should easily realize that, in combination with the units and algorithm operations of the examples described in the embodiments herein, the present application can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Professionals may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
The embodiments of the present application can divide the first device or the second device into functional modules according to the above method examples; for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is illustrative and is only a logical functional division; there may be other division manners in actual implementation.
For example, in the case of dividing functional modules in an integrated manner, FIG. 7 shows a schematic structural diagram of a smart camera testing apparatus. The testing apparatus may be the first device, or a chip or system-on-chip in the first device, or another combined device or component that can implement the functions of the first device; the testing apparatus can be used to perform the functions of the first device involved in the above embodiments.
As a possible implementation, as shown in FIG. 7, the testing apparatus 700 may include:
a sending module 701 configured to send a test code stream to a second device, the test code stream including multiple frames of test images;
a receiving module 702 configured to receive a test result code stream sent by the second device, the test result code stream including multiple frames of test result images, where a test result image is obtained by superimposing recognition result information on the test image corresponding to the test result image, and the recognition result information is obtained by the second device performing artificial intelligence (AI) recognition on the test image; and
a control module 703 configured to control the test code stream and the test result code stream to be displayed together on a display.
In a possible design, the testing apparatus 700 may further include:
an acquisition module configured to obtain the multiple frames of test images of the test code stream frame by frame from a first player or a decoder, the first player being the player that plays the test code stream.
The sending module 701 is specifically configured to send the multiple frames of test images to the second device frame by frame.
In a possible design, in the case where the first player is playing the test code stream, the acquisition module is specifically configured to obtain a target test image from the first player, the target test image being the test image currently played by the first player; the sending module 701 is specifically configured to send the target test image to the second device.
In a possible design, the acquisition module is specifically configured to: obtain multiple frames of original test images of the test code stream from the first player or the decoder; and perform at least one of image compression, format conversion, or resolution adjustment on the original test images frame by frame to obtain the multiple frames of test images of the test code stream.
In a possible design, the control module 703 is specifically configured to: control the first player to play the test code stream; and after the test result code stream sent by the second device is received, control a second player to play the test result code stream.
In a possible design, the control module 703 is specifically configured to control a test image and the test result image corresponding to the test image to be displayed synchronously.
In a possible design, the control module 703 is specifically configured to: by adjusting the playback rate at which the first player plays the test code stream, control the second player to play the test result image corresponding to the test image played by the first player.
In a possible design, the test result code stream includes mixed images, a mixed image being a stitched image of a test image and the test result image corresponding to the test image.
In a possible design, the control module 703 is specifically further configured to: display the mixed images in the test result code stream on the display; or perform image segmentation on the mixed images in the test result code stream frame by frame to obtain each frame of test image and the test result image corresponding to each frame of test image, and control the second player to play the test result images while controlling the first player to play the test images.
In a possible design, the sending module 701 is further configured to send hypertext transfer protocol (HTTP) request information to the second device, where the HTTP request information includes a URL corresponding to a control operation, the control operation being used to instruct the second device to control the test image to be processed.
In a possible design, the HTTP request information further includes input parameters, the input parameters including a frame sequence number or a timestamp of the test image to be processed, the input parameters being used to indicate the test image to be processed that corresponds to the control operation.
In a possible design, the receiving module 702 is further configured to receive server-sent events (SSE) information sent by the second device, the SSE information including position information or time information of the test image currently processed by the second device; the control module is specifically further configured to control the currently displayed test image and test result image according to the position information or the time information.
In a possible design, the test code stream comes from a storage system of the testing apparatus, or a network file system (NFS) of the testing apparatus, or a storage system of a third device.
An embodiment of the present application further provides another smart camera testing apparatus. As shown in FIG. 8, the testing apparatus 800 may include:
a receiving module 801 configured to receive, frame by frame, test images sent by a first device, where a test image is the currently processed test image that the first device obtains from a first player or a decoder;
an AI identification module 802 configured to perform artificial intelligence (AI) recognition on the test images frame by frame to obtain a test result image corresponding to each frame of test image, the test result image including the test image and recognition result information obtained by performing AI recognition on the test image; and
a sending module 803 configured to send a test result code stream to the first device, the test result code stream including multiple frames of test result images.
In a possible design, the testing apparatus 800 further includes:
a processing module configured to perform at least one of decompression, format conversion, or resolution adjustment frame by frame on the multiple frames of original test images obtained by the receiving module, to obtain the multiple frames of test images of the test code stream.
An embodiment of the present application further provides another smart camera testing apparatus. As shown in FIG. 9, the testing apparatus 900 may include:
an acquisition module 901 configured to obtain a test code stream, the test code stream including multiple frames of test images;
an AI identification module 902 configured to perform artificial intelligence (AI) recognition on the test images in the test code stream frame by frame to obtain a test result image corresponding to each frame of test image, the test result image including the test image and recognition result information obtained by performing AI recognition on the test image;
a processing module 903 configured to stitch each test image with its corresponding test result image frame by frame to obtain a mixed image corresponding to the test image; and
a sending module 904 configured to send a test result code stream to the first device, the test result code stream including multiple frames of mixed images.
In a possible design, the testing apparatus 900 may further include:
a decoding module configured to decode and/or de-encapsulate the test code stream to obtain the multiple frames of test images.
In a possible design, the testing apparatus 900 may further include:
a receiving module configured to receive hypertext transfer protocol (HTTP) request information sent by the first device, where the HTTP request information includes a URL corresponding to a control operation; the AI identification module is specifically further configured to control the test image to be processed based on the control operation.
In a possible design, the HTTP request information further includes input parameters, the input parameters including a frame sequence number or a timestamp of the test image to be processed, the input parameters being used to indicate the test image to be processed that corresponds to the control operation.
In a possible design, the sending module 904 is further configured to send server-sent events (SSE) information to the first device, the SSE information including position information or time information of the test image currently processed by the second device, the position information or time information being used to instruct the first device to control the display of the test images and the test result images.
In a possible design, the test code stream comes from a storage system of the first device, or a network file system (NFS) of the testing apparatus 900, or a storage system of a third device.
It can be understood that when the testing apparatus is an electronic device, the above sending module may be a transmitter, which may include an antenna and radio frequency circuits, and the processing module may be a processor, for example a baseband chip. When the testing apparatus is a component with the functions of the above first device or second device, the sending module may be a radio frequency unit and the processing module may be a processor. When the apparatus is a chip system, the sending module may be the output interface of the chip system and the processing module may be the processor of the chip system, for example a central processing unit (CPU).
It should be noted that, for the specific execution processes and embodiments of the above apparatus 700, reference may be made to the steps performed by the first device in the above method embodiments and the related descriptions; for the above apparatus 800 or apparatus 900, reference may be made to the steps performed by the second device in the above method embodiments and the related descriptions. For the technical problems solved and the technical effects achieved, reference may also be made to the content described in the foregoing embodiments, which is not repeated here.
In this embodiment, the testing apparatus is presented in the form of dividing functional modules in an integrated manner. A "module" here may refer to a specific circuit, a processor and memory executing one or more software or firmware programs, an integrated logic circuit, and/or another device that can provide the above functions. In a simple embodiment, those skilled in the art can conceive that the testing apparatus may take the form shown in FIG. 10 below.
FIG. 10 is a schematic structural diagram of an exemplary electronic device 1000 according to an embodiment of the present application. The electronic device 1000 may be the first device or the second device in the above implementations and is used to perform the smart camera testing methods in the above implementations. As shown in FIG. 10, the electronic device 1000 may include at least one processor 1001, a communication line 1002, and a memory 1003.
The processor 1001 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits.
The communication line 1002 may include a path for transferring information between the above components; the communication line may be, for example, a bus.
The memory 1003 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through the communication line 1002, or may be integrated with the processor. The memory provided in the embodiments of the present application is generally a non-volatile memory. The memory 1003 is used to store the computer program instructions involved in executing the solutions of the embodiments of the present application, with execution controlled by the processor 1001. The processor 1001 is used to execute the computer program instructions stored in the memory 1003, thereby implementing the methods provided by the embodiments of the present application.
Optionally, the computer program instructions in the embodiments of the present application may also be called application program code, which is not specifically limited in the embodiments of the present application.
In specific implementation, as an embodiment, the processor 1001 may include one or more CPUs, for example CPU0 and CPU1 in FIG. 10.
In specific implementation, as an embodiment, the electronic device 1000 may include multiple processors, for example the processor 1001 and the processor 1007 in FIG. 10. These processors may be single-core (single-CPU) processors or multi-core (multi-CPU) processors. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (such as computer program instructions).
In specific implementation, as an embodiment, the electronic device 1000 may further include a communication interface 1004. The electronic device can send and receive data through the communication interface 1004, or communicate with other devices or a communication network; the communication interface 1004 may be, for example, an Ethernet interface, a radio access network (RAN) interface, a wireless local area networks (WLAN) interface, or a USB interface.
In specific implementation, as an embodiment, the electronic device 1000 may further include an output device 1005 and an input device 1006. The output device 1005 communicates with the processor 1001 and can display information in multiple ways; for example, the output device 1005 may be a liquid crystal display (LCD), a light-emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 1006 communicates with the processor 1001 and can receive user input in multiple ways; for example, the input device 1006 may be a mouse, a keyboard, a touchscreen device, or a sensing device.
In specific implementation, the electronic device 1000 may be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, a smart camera, or a device with a structure similar to that in FIG. 10. The embodiments of the present application do not limit the type of the electronic device 1000; if it is used to implement the method of the second device in the above embodiments, the electronic device 1000 needs to be equipped with a smart camera.
In some embodiments, the processor 1001 in FIG. 10 can invoke the computer program instructions stored in the memory 1003 to cause the electronic device 1000 to perform the methods in the above method embodiments.
Exemplarily, the functions/implementation processes of the processing modules in FIG. 7, FIG. 8, or FIG. 9 can be implemented by the processor 1001 in FIG. 10 invoking the computer program instructions stored in the memory 1003. For example, the functions/implementation processes of the control module 703 and the acquisition module in FIG. 7 can be implemented by the processor 1001 in FIG. 10 invoking the computer-executable instructions stored in the memory 1003; so can the functions/implementation processes of the AI identification module 802 and the processing module in FIG. 8, and those of the acquisition module 901, the AI identification module 902, the processing module 903, or the decoding module in FIG. 9.
In an exemplary embodiment, a computer-readable storage medium including instructions is also provided. The above instructions can be executed by the processor 1001 of the electronic device 1000 to complete the smart camera testing methods of the above embodiments. For the technical effects obtainable thereby, refer to the above method embodiments, which are not repeated here.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by a software program, implementation may be in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
FIG. 11 is a schematic structural diagram of a chip provided by an embodiment of the present application. The chip 1100 includes one or more processors 1101 and an interface circuit 1102. Optionally, the chip 1100 may further include a bus 1103.
The processor 1101 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above methods can be completed by integrated logic circuits of hardware in the processor 1101 or by instructions in the form of software. The above processor 1101 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods and steps disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may also be any conventional processor or the like.
The interface circuit 1102 is used for sending or receiving data, instructions, or information. The processor 1101 can process the data, instructions, or other information received by the interface circuit 1102 and can send the processing-completion information out through the interface circuit 1102.
Optionally, the chip 1100 further includes a memory. The memory may include a read-only memory and a random access memory and provides operation instructions and data to the processor. A part of the memory may also include a non-volatile random access memory (NVRAM).
Optionally, the memory stores executable software modules or data structures, and the processor can perform corresponding operations by invoking the operation instructions stored in the memory (the operation instructions may be stored in an operating system).
Optionally, the chip 1100 can be used in the testing apparatuses (including the first device and the second device) involved in the embodiments of the present application. Optionally, the interface circuit 1102 can be used to output the execution results of the processor 1101. For the communication methods provided by one or more embodiments of the present application, reference may be made to the foregoing embodiments, which are not repeated here.
It should be noted that the respective functions of the processor 1101 and the interface circuit 1102 can be implemented through hardware design, through software design, or through a combination of software and hardware, which is not limited here.
Those skilled in the art will easily conceive of other implementations of the present application after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present application that follow its general principles and include common knowledge or customary technical means in the technical field not disclosed in the present application.
Finally, it should be noted that the above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto; any change or replacement within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (45)

  1. A test method for a smart camera, wherein the method comprises:
    sending, by a first apparatus, a test code stream to a second apparatus, the test code stream comprising multiple frames of test images;
    receiving, by the first apparatus, a test result code stream sent by the second apparatus, the test result code stream comprising multiple frames of test result images, wherein each test result image is obtained by superimposing recognition result information on the test image corresponding to that test result image, and the recognition result information is obtained by the second apparatus performing artificial intelligence (AI) recognition on the test image;
    controlling, by the first apparatus, the test code stream and the test result code stream to be displayed together on a display.
  2. The method according to claim 1, wherein sending, by the first apparatus, the test code stream to the second apparatus specifically comprises:
    obtaining, by the first apparatus, the multiple frames of test images of the test code stream frame by frame from a first player or a decoder, the first player being a player that plays the test code stream;
    sending, by the first apparatus, the multiple frames of test images to the second apparatus frame by frame.
  3. The method according to claim 1 or 2, wherein sending, by the first apparatus, the test code stream to the second apparatus specifically comprises:
    obtaining, by the first apparatus, a target test image from the first player while the first player plays the test code stream, the target test image being the test image currently played by the first player;
    sending, by the first apparatus, the target test image to the second apparatus.
  4. The method according to claim 2, wherein obtaining, by the first apparatus, the multiple frames of test images of the test code stream frame by frame from the first player or the decoder specifically comprises:
    obtaining, by the first apparatus, multiple frames of original test images of the test code stream;
    performing, by the first apparatus, frame by frame, at least one of image compression, format conversion, or resolution adjustment on the original test images, to obtain the multiple frames of test images of the test code stream.
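The per-frame resolution adjustment named in claim 4 can be sketched as follows. The nearest-neighbour scaler below is an illustrative stand-in only; a real test tool would more likely invoke a codec or hardware scaler for the image-compression and format-conversion cases.

```python
import numpy as np

def adjust_resolution(frame: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resolution adjustment of an H x W x 3 frame.

    For each output pixel, pick the source pixel whose coordinates scale
    proportionally; this keeps the sketch dependency-free.
    """
    h, w = frame.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return frame[rows][:, cols]

# Downscale an 8x8 synthetic frame to 4x4.
src = np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3)
small = adjust_resolution(src, 4, 4)
```

In a real pipeline this step would sit between the player/decoder and the network send, so that the second apparatus receives frames at the resolution its AI model expects.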
  5. The method according to any one of claims 1 to 4, wherein controlling, by the first apparatus, the test code stream and the test result code stream to be displayed together on a display specifically comprises:
    controlling, by the first apparatus, a first player to play the test code stream;
    after the first apparatus receives the test result code stream sent by the second apparatus, controlling a second player to play the test result code stream.
  6. The method according to any one of claims 1 to 5, wherein controlling, by the first apparatus, the test code stream and the test result code stream to be displayed together on a display specifically comprises:
    controlling, by the first apparatus, each test image and the test result image corresponding to that test image to be displayed synchronously.
  7. The method according to claim 6, wherein controlling, by the first apparatus, each test image and the test result image corresponding to that test image to be displayed synchronously specifically comprises:
    adjusting, by the first apparatus, the playback rate at which the first player plays the test code stream, so as to control the second player to play the test result image corresponding to the test image played by the first player.
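One way the rate-based synchronization of claim 7 might be realized is a small feedback controller that nudges the first player's rate whenever the two players drift apart. The gain and clamping values below are illustrative tuning assumptions, not taken from the claims.

```python
def adjust_playback_rate(test_frame_idx: int, result_frame_idx: int,
                         base_rate: float = 1.0, gain: float = 0.05,
                         min_rate: float = 0.1, max_rate: float = 4.0) -> float:
    """Return a new playback rate for the first (test) player so that the
    test image it shows tracks the test result image shown by the second
    player. If the test player runs ahead, the rate drops below base_rate;
    if it lags behind, the rate rises. Output is clamped to a sane range.
    """
    lag = result_frame_idx - test_frame_idx
    return min(max_rate, max(min_rate, base_rate + gain * lag))

rate_in_sync = adjust_playback_rate(240, 240)
rate_behind = adjust_playback_rate(230, 240)   # test player lags: speed up
rate_ahead = adjust_playback_rate(250, 240)    # test player leads: slow down
```

Calling this once per displayed frame keeps the two players loosely locked without needing a shared clock.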
  8. The method according to claim 1, wherein the test result code stream comprises mixed images, and each mixed image is a stitched image of a test image and the test result image corresponding to that test image.
  9. The method according to claim 8, wherein controlling, by the first apparatus, the test code stream and the test result code stream to be displayed together on a display specifically comprises:
    displaying, by the first apparatus, the mixed images in the test result code stream on the display;
    or,
    performing, by the first apparatus, image segmentation on the mixed images in the test result code stream frame by frame, to obtain each frame of test image and the test result image corresponding to that frame of test image;
    controlling, by the first apparatus, a first player to play the test images while controlling a second player to play the test result images.
  10. The method according to claim 8 or 9, wherein the method further comprises:
    sending, by the first apparatus, Hypertext Transfer Protocol (HTTP) request information to the second apparatus, wherein the HTTP request information comprises a uniform resource locator (URL) corresponding to a control operation, and the control operation is used to instruct the second apparatus to control a test image to be processed.
  11. The method according to claim 10, wherein the HTTP request information further comprises an input parameter, the input parameter comprises a frame sequence number or a timestamp of the test image to be processed, and the input parameter is used to indicate the test image to be processed that corresponds to the control operation.
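A minimal sketch of how such an HTTP control request could be formed, assuming a hypothetical `/control/<operation>` endpoint and a `frame_seq` parameter name; neither name is prescribed by the claims, which only require a URL per control operation plus a frame sequence number (or timestamp) as input parameter.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_control_request(host: str, operation: str, frame_seq: int) -> str:
    """Build the URL of an HTTP control request to the second apparatus.

    The control operation selects the URL path; the frame sequence number
    rides along as a query-string input parameter identifying the test
    image to be processed.
    """
    return f"http://{host}/control/{operation}?{urlencode({'frame_seq': frame_seq})}"

# Example: ask the device to seek to test image number 120.
url = build_control_request("192.0.2.10:8080", "seek", 120)
parsed = urlparse(url)
params = parse_qs(parsed.query)
```

The first apparatus would then issue this URL with an ordinary HTTP GET or POST; a timestamp-based variant would simply substitute a `timestamp` parameter for `frame_seq`.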
  12. The method according to claim 8 or 9, wherein controlling, by the first apparatus, the test code stream and the test result code stream to be displayed together on a display specifically comprises:
    receiving, by the first apparatus, server-sent events (SSE) information sent by the second apparatus, the SSE information comprising position information or time information of the test image currently processed by the second apparatus;
    controlling, by the first apparatus, the currently displayed test image and test result image according to the position information or the time information.
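Server-sent events arrive as plain text in which each event is a block of `field: value` lines terminated by a blank line. A simplified parser for such progress events is sketched below; the `event`/`data` payload is an illustrative assumption, and multi-line `data` fields of the full SSE format are not handled.

```python
def parse_sse(stream_text: str) -> list[dict]:
    """Parse SSE text into a list of {field: value} event dicts."""
    events, current = [], {}
    for line in stream_text.splitlines():
        if not line:                 # blank line terminates an event
            if current:
                events.append(current)
                current = {}
            continue
        if ":" in line:
            field, _, value = line.partition(":")
            current[field.strip()] = value.strip()
    if current:                      # flush a trailing unterminated event
        events.append(current)
    return events

# Two hypothetical progress events carrying the currently processed frame.
sse_text = "event: progress\ndata: frame=42\n\nevent: progress\ndata: frame=43\n\n"
events = parse_sse(sse_text)
```

On each parsed event, the first apparatus could look up the named frame position and align what the two players currently display.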
  13. The method according to any one of claims 1 to 12, wherein the test code stream comes from a storage system of the first apparatus, a network file system (NFS) of the first apparatus, or a storage system of a third apparatus.
  14. A test method for a smart camera, wherein the method comprises:
    receiving, by a second apparatus, frame by frame, test images sent by a first apparatus, wherein each test image is the currently processed test image obtained by the first apparatus from a first player or a decoder;
    performing, by the second apparatus, artificial intelligence (AI) recognition on the test images frame by frame, to obtain the test result image corresponding to each frame of test image, the test result image comprising the test image and recognition result information obtained by performing AI recognition on the test image;
    sending, by the second apparatus, a test result code stream to the first apparatus, the test result code stream comprising multiple frames of test result images.
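Treating a frame as an H x W x 3 array, the superposition of recognition result information in claim 14 can be sketched as burning a detection bounding box into the frame. This is a deliberately minimal stand-in: a real implementation would also render class labels and confidence scores.

```python
import numpy as np

def overlay_result(frame: np.ndarray, box: tuple, intensity: int = 255) -> np.ndarray:
    """Superimpose recognition result info (here, one bounding box) on a frame.

    `box` is (top, left, bottom, right) in pixels. Draws a 2-pixel border
    into the red channel and leaves the input frame untouched.
    """
    out = frame.copy()
    t, l, b, r = box
    out[t:t + 2, l:r, 0] = intensity   # top edge
    out[b - 2:b, l:r, 0] = intensity   # bottom edge
    out[t:b, l:l + 2, 0] = intensity   # left edge
    out[t:b, r - 2:r, 0] = intensity   # right edge
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)
result_frame = overlay_result(frame, (100, 200, 300, 400))
```

The per-frame `result_frame` outputs, encoded back into a stream, form the test result code stream that is returned to the first apparatus.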
  15. The method according to claim 14, wherein before the second apparatus performs AI recognition on the test images frame by frame, the method further comprises:
    obtaining, by the second apparatus, multiple frames of original test images in a test code stream;
    performing, by the second apparatus, frame by frame, at least one of decompression, format conversion, or resolution adjustment on the original test images, to obtain the multiple frames of test images of the test code stream.
  16. A test method for a smart camera, wherein the method comprises:
    obtaining, by a second apparatus, a test code stream, the test code stream comprising multiple frames of test images;
    performing, by the second apparatus, artificial intelligence (AI) recognition on the test images in the test code stream frame by frame, to obtain the test result image corresponding to each frame of test image, the test result image being obtained by superimposing recognition result information on the test image;
    stitching, by the second apparatus, frame by frame, each test image with the test result image corresponding to that test image, to obtain the mixed image corresponding to the test image;
    sending, by the second apparatus, a test result code stream to a first apparatus, the test result code stream comprising multiple frames of the mixed images.
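With frames as H x W x 3 arrays, the frame-by-frame stitching in claim 16 reduces to a horizontal concatenation, sketched below under the assumption that both frames share one resolution; the image segmentation of claim 9 on the receiving side is then just the inverse slicing.

```python
import numpy as np

def stitch_mixed_frame(test_img: np.ndarray, result_img: np.ndarray) -> np.ndarray:
    """Place the test image and its test result image side by side,
    producing one mixed image per frame."""
    if test_img.shape != result_img.shape:
        raise ValueError("test and result frames must share one resolution")
    return np.hstack((test_img, result_img))

# A black 720p test frame next to a white 720p result frame.
test_img = np.zeros((720, 1280, 3), dtype=np.uint8)
result_img = np.full((720, 1280, 3), 255, dtype=np.uint8)
mixed = stitch_mixed_frame(test_img, result_img)
```

On the first apparatus, `mixed[:, :1280]` recovers the test image and `mixed[:, 1280:]` the test result image, which is all the segmentation step of claim 9 requires for this layout.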
  17. The method according to claim 16, wherein before the second apparatus performs AI recognition on the test images in the test code stream frame by frame, the method further comprises:
    performing, by the second apparatus, decoding and/or decapsulation on the test code stream to obtain the multiple frames of test images.
  18. The method according to claim 16 or 17, wherein the method further comprises:
    receiving, by the second apparatus, Hypertext Transfer Protocol (HTTP) request information sent by the first apparatus, wherein the HTTP request information comprises a uniform resource locator (URL) corresponding to a control operation;
    controlling, by the second apparatus, a test image to be processed based on the control operation.
  19. The method according to claim 18, wherein the HTTP request information further comprises an input parameter, the input parameter comprises a frame sequence number or a timestamp of the test image to be processed, and the input parameter is used to indicate the test image to be processed that corresponds to the control operation.
  20. The method according to claim 16 or 17, wherein the method further comprises:
    sending, by the second apparatus, server-sent events (SSE) information to the first apparatus, the SSE information comprising position information or time information of the test image currently processed by the second apparatus, the position information or the time information being used to instruct the first apparatus to control the display of the test images and the test result images.
  21. The method according to any one of claims 16 to 20, wherein the test code stream comes from a storage system of the first apparatus, a network file system (NFS) of the second apparatus, or a storage system of a third apparatus.
  22. A test apparatus for a smart camera, wherein the test apparatus comprises:
    a sending module, configured to send a test code stream to a second apparatus, the test code stream comprising multiple frames of test images;
    a receiving module, configured to receive a test result code stream sent by the second apparatus, the test result code stream comprising multiple frames of test result images, wherein each test result image is obtained by superimposing recognition result information on the test image corresponding to that test result image, and the recognition result information is obtained by the second apparatus performing artificial intelligence (AI) recognition on the test image;
    a control module, configured to control the test code stream and the test result code stream to be displayed together on a display.
  23. The test apparatus according to claim 22, wherein the test apparatus further comprises:
    an obtaining module, configured to obtain the multiple frames of test images of the test code stream frame by frame from a first player or a decoder, the first player being a player that plays the test code stream;
    the sending module being specifically configured to send the multiple frames of test images to the second apparatus frame by frame.
  24. The test apparatus according to claim 22 or 23, wherein, when a first player plays the test code stream, the obtaining module is specifically configured to:
    obtain a target test image from the first player, the target test image being the test image currently played by the first player;
    the sending module being specifically configured to send the target test image to the second apparatus.
  25. The test apparatus according to claim 23, wherein the obtaining module is specifically configured to:
    obtain multiple frames of original test images of the test code stream from the first player or the decoder;
    perform, frame by frame, at least one of image compression, format conversion, or resolution adjustment on the original test images, to obtain the multiple frames of test images of the test code stream.
  26. The test apparatus according to any one of claims 22 to 25, wherein the control module is specifically configured to:
    control a first player to play the test code stream;
    after the test result code stream sent by the second apparatus is received, control a second player to play the test result code stream.
  27. The test apparatus according to any one of claims 22 to 26, wherein the control module is specifically configured to:
    control each test image and the test result image corresponding to that test image to be displayed synchronously.
  28. The test apparatus according to claim 27, wherein the control module is specifically configured to:
    adjust the playback rate at which a first player plays the test code stream, to control a second player to play the test result image corresponding to the test image played by the first player.
  29. The test apparatus according to claim 22, wherein the test result code stream comprises mixed images, and each mixed image is a stitched image of a test image and the test result image corresponding to that test image.
  30. The test apparatus according to claim 29, wherein the control module is specifically further configured to:
    display the mixed images in the test result code stream on a display;
    or,
    perform image segmentation on the mixed images in the test result code stream frame by frame, to obtain each frame of test image and the test result image corresponding to that frame of test image;
    control a first player to play the test images while controlling a second player to play the test result images.
  31. The test apparatus according to claim 29 or 30, wherein the sending module is further configured to:
    send Hypertext Transfer Protocol (HTTP) request information to the second apparatus, wherein the HTTP request information comprises a uniform resource locator (URL) corresponding to a control operation, and the control operation is used to instruct the second apparatus to control a test image to be processed.
  32. The test apparatus according to claim 31, wherein the HTTP request information further comprises an input parameter, the input parameter comprises a frame sequence number or a timestamp of the test image to be processed, and the input parameter is used to indicate the test image to be processed that corresponds to the control operation.
  33. The test apparatus according to claim 29 or 30, wherein the receiving module is further configured to:
    receive server-sent events (SSE) information sent by the second apparatus, the SSE information comprising position information or time information of the test image currently processed by the second apparatus;
    the control module being specifically further configured to control the currently displayed test image and test result image according to the position information or the time information.
  34. The test apparatus according to any one of claims 22 to 33, wherein the test code stream comes from a storage system of the test apparatus, a network file system (NFS) of the test apparatus, or a storage system of a third apparatus.
  35. A test apparatus for a smart camera, wherein the test apparatus comprises:
    a receiving module, configured to receive, frame by frame, test images sent by a first apparatus, wherein each test image is the currently processed test image obtained by the first apparatus from a first player or a decoder;
    an AI recognition module, configured to perform artificial intelligence (AI) recognition on the test images frame by frame, to obtain the test result image corresponding to each frame of test image, the test result image comprising the test image and recognition result information obtained by performing AI recognition on the test image;
    a sending module, configured to send a test result code stream to the first apparatus, the test result code stream comprising multiple frames of test result images.
  36. The test apparatus according to claim 35, wherein the test apparatus further comprises:
    a processing module, configured to perform, frame by frame, at least one of decompression, format conversion, or resolution adjustment on the multiple frames of original test images obtained by the receiving module, to obtain the multiple frames of test images of a test code stream.
  37. A test apparatus for a smart camera, wherein the test apparatus comprises:
    an obtaining module, configured to obtain a test code stream, the test code stream comprising multiple frames of test images;
    an AI recognition module, configured to perform artificial intelligence (AI) recognition on the test images in the test code stream frame by frame, to obtain the test result image corresponding to each frame of test image, the test result image comprising the test image and recognition result information obtained by performing AI recognition on the test image;
    a processing module, configured to stitch, frame by frame, each test image with the test result image corresponding to that test image, to obtain the mixed image corresponding to the test image;
    a sending module, configured to send a test result code stream to a first apparatus, the test result code stream comprising multiple frames of the mixed images.
  38. The test apparatus according to claim 37, wherein the test apparatus further comprises a decoding module, configured to:
    perform decoding and/or decapsulation on the test code stream to obtain the multiple frames of test images.
  39. The test apparatus according to claim 37 or 38, wherein the test apparatus further comprises:
    a receiving module, configured to receive Hypertext Transfer Protocol (HTTP) request information sent by the first apparatus, wherein the HTTP request information comprises a uniform resource locator (URL) corresponding to a control operation;
    the AI recognition module being specifically further configured to control a test image to be processed based on the control operation.
  40. The test apparatus according to claim 39, wherein the HTTP request information further comprises an input parameter, the input parameter comprises a frame sequence number or a timestamp of the test image to be processed, and the input parameter is used to indicate the test image to be processed that corresponds to the control operation.
  41. The test apparatus according to claim 37 or 38, wherein the sending module is further configured to:
    send server-sent events (SSE) information to the first apparatus, the SSE information comprising position information or time information of the test image currently processed by the test apparatus, the position information or the time information being used to instruct the first apparatus to control the display of the test images and the test result images.
  42. The test apparatus according to any one of claims 37 to 41, wherein the test code stream comes from a storage system of the first apparatus, a network file system (NFS) of the test apparatus, or a storage system of a third apparatus.
  43. A test apparatus for a smart camera, comprising:
    a processor and a transmission interface;
    wherein the processor is configured to execute instructions stored in a memory, to implement the test method for a smart camera according to any one of claims 1 to 13, 14 to 15, or 16 to 21.
  44. A computer-readable storage medium storing instructions which, when executed by a computer or a processor, enable the computer or the processor to perform the test method for a smart camera according to any one of claims 1 to 13, 14 to 15, or 16 to 21.
  45. A computer program product which, when run on a computer or a processor, causes the computer or the processor to perform the test method for a smart camera according to any one of claims 1 to 13, 14 to 15, or 16 to 21.

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202080100243.6A | 2020-04-28 | 2020-04-28 | Test method and apparatus for a smart camera
PCT/CN2020/087633 | 2020-04-28 | 2020-04-28 | Test method and apparatus for a smart camera


Publications (1)

Publication Number | Publication Date
WO2021217467A1 | 2021-11-04



Also Published As

Publication Number | Publication Date
CN115516431A | 2022-12-23


Legal Events

Code | Title | Details
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20933146; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | EP: PCT application non-entry in European phase | Ref document number: 20933146; Country of ref document: EP; Kind code of ref document: A1