CN111432205A - Automatic testing method for video synthesis correctness - Google Patents

Automatic testing method for video synthesis correctness

Info

Publication number
CN111432205A
Authority
CN
China
Prior art keywords
video
picture
verified
frame
pictures
Prior art date
Legal status
Granted
Application number
CN202010304620.XA
Other languages
Chinese (zh)
Other versions
CN111432205B (en)
Inventor
姜珑珠
顾湘余
张子慧
Current Assignee
Hangzhou Quwei Science & Technology Co ltd
Original Assignee
Hangzhou Quwei Science & Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Quwei Science & Technology Co ltd
Priority to CN202010304620.XA
Publication of CN111432205A
Application granted
Publication of CN111432205B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8586 Linking data to content by using a URL
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing


Abstract

The invention discloses an automatic testing method for video synthesis correctness. When the same material resources are used, frames are extracted from the newly synthesized video and from the verified video, the RGB value of each frame is checked pixel by pixel, whether the newly synthesized video meets expectations is verified automatically, and an Allure report is generated from all video verification results. The invention has the following beneficial effects: an accurate comparison result is obtained; the comparison similarity threshold can be set flexibly; problems are easy to view and analyze; results are displayed in the Allure report for easy access; and the whole process runs automatically, so a single click starts the build and manpower is freed.

Description

Automatic testing method for video synthesis correctness
Technical Field
The invention relates to the technical field of video processing, and in particular to an automatic testing method for video synthesis correctness.
Background
Automatically verifying whether the actual effect of an exported video meets expectations has always been important research content in the automated testing of video synthesis correctness. Whenever an audio/video engine is changed, optimized, or upgraded, the effect of the exported video must be verified; judging manually whether two videos are completely consistent is too subjective and prone to error.
At present, people pay increasing attention to how video effects are presented, yet no mature video comparison technology exists: two videos cannot be automatically compared and analyzed in detail with a comparison result output. If the video content is too rich, the color changes are rich, the applied transitions, special effects, and materials are complex, many videos need to be synthesized and compared, or access to the videos being compared is slow, inaccurate comparison results still occur even after considerable manpower and time are spent.
Disclosure of Invention
The invention provides a labor-saving, high-accuracy automatic testing method for video synthesis correctness to overcome the defects in the prior art.
To achieve this purpose, the invention adopts the following technical solution:
An automatic testing method for video synthesis correctness, comprising the following steps:
(1) uploading the same picture and video material resources used by the verified video to the mobile phone, importing the pictures or video resources in the same order, and creating a template video;
(2) exporting the created template video to the mobile phone album, obtaining the video path, and uploading the video to a cloud server;
(3) splicing together the path of the corresponding verified video on the cloud server from the template video name, and printing the newly synthesized video path and the verified video path into an Allure test report;
(4) extracting frames from the newly synthesized video and the verified video respectively, with a configurable frame extraction interval in seconds;
(5) comparing the RGB value of each pixel of the frame pictures extracted at each interval, marking differing pixels in red, and saving the difference picture to the cloud server;
(6) calculating the matching rate over all pixels of the two pictures; if the matching rate is below the set similarity threshold, the two pictures are judged inconsistent, and if it is above, the two pictures are judged the same;
(7) if the matching rate of every extracted frame picture is above the set similarity threshold, the two videos are the same, and an Allure test report is output.
When the same material resources are used, the method extracts frames from the newly synthesized video and from the verified video, checks the RGB value of each frame pixel by pixel, automatically verifies whether the newly synthesized video meets expectations, and generates an Allure report from all video verification results. The method automatically exports the new video and fetches the verified video from the Alibaba Cloud server for frame extraction and comparison, obtaining an accurate comparison result; the comparison similarity threshold can be set flexibly; differing regions are marked in red, making problems easy to view and analyze; the per-frame similarity, the verified video link, the new video link, and the difference-picture screenshots are all displayed in the Allure report for easy access; and the whole process of video export, video upload, video comparison, result output, and report generation runs automatically, so a single click starts the build and manpower is freed.
Preferably, in step (3), the verified video is a standard video determined to be qualified through manual inspection and stored at an assigned path on the cloud server; the assigned path plus the template video name locates exactly one video, so the assigned path of the corresponding verified video on the cloud server is spliced together from the template video name. The newly synthesized video is the video run out by the test case in each automated test execution: a new template video with the template special effects, synthesized from the template and the original video material through a new package.
Preferably, in step (4), the specific operation is as follows:
(41) the link of the newly synthesized video uploaded to the cloud server is URL1, and the link of the corresponding verified video on the cloud server is URL2; the predicted video duration, the frame extraction interval in seconds, and the preset similarity threshold are passed as parameters into the compositFrames() function, which calls the getFrames() function to extract frames from the newly synthesized video and the verified video respectively;
(42) using the frame extraction service of the cloud server, frame screenshots are captured from the video according to the predicted duration and the extraction interval; the captured images are stored in a List<InputStream>, and all frame screenshots, extraction time points, and the cloud-server URL addresses of the screenshots are recorded in a VideoFrames class;
(43) the VideoFrames object of the newly synthesized video, the VideoFrames object of the verified video, and the preset similarity threshold obtained from the frame extraction are then passed into the compleFrames() function to compare the video frames.
Preferably, in step (5), the specific operation is as follows:
(51) take the frame-screenshot lists from the newly synthesized video object and the verified video object, loop over the screenshots in the lists, and pass the two screenshots at each corresponding time point, together with the preset similarity threshold, as parameters into the CompareAndMarkDiff() function to compare the two screenshots and mark the difference points;
(52) create three BufferedImage objects, namely the source picture, the expected picture, and the difference picture; preset the maximum per-channel pixel difference for red, green, and blue; and resize the expected picture with the changeImageSize() function to match the source picture size;
(53) loop over the screenshots in the lists; for each pixel, obtain the RGB values of the source picture and the expected picture with the getRGB() function, then call the getRed(), getGreen(), and getBlue() functions to obtain the per-channel values of the source pixel and the expected pixel in the red, green, and blue dimensions, and compute each per-channel pixel difference with the abs() function;
(54) if the pixel difference in any single channel exceeds the preset maximum, the pixel is judged inconsistent: the setRGB() function is called to mark that pixel of the difference picture in red, and the difference count is incremented by 1.
Preferably, in step (6), the specific operation is as follows: save the difference picture and calculate the matching rate; multiply the width and height of the source picture to obtain the total pixel count, subtract the difference count from the total pixel count, and divide by the total pixel count to obtain the matching rate of the two pictures.
The invention has the following beneficial effects: an accurate comparison result is obtained; the comparison similarity threshold can be set flexibly; problems are easy to view and analyze; results are displayed in the Allure report for easy access; and the whole process runs automatically, so a single click starts the build and manpower is freed.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described below with reference to the figures and a specific embodiment.
In the embodiment shown in FIG. 1, an automatic testing method for video synthesis correctness comprises the following steps:
(1) uploading the same picture and video material resources used by the verified video to the mobile phone, importing the pictures or video resources in the same order, and creating a template video;
(2) exporting the created template video to the mobile phone album, obtaining the video path, and uploading the video to a cloud server;
(3) splicing together the path of the corresponding verified video on the cloud server from the template video name, and printing the newly synthesized video path and the verified video path into an Allure test report. The verified video is a standard video determined to be qualified through manual inspection and stored at an assigned path on the cloud server; the assigned path plus the template video name locates exactly one video, so the assigned path of the corresponding verified video on the cloud server is spliced together from the template video name. The newly synthesized video is the video run out by the test case in each automated test execution: a new template video with the template special effects, synthesized from the template and the original video material through a new package.
(4) Extract frames from the newly synthesized video and the verified video respectively, with a configurable frame extraction interval in seconds. The specific operation is as follows:
(41) after the newly synthesized video is uploaded to the cloud server, its link is URL1, and the link of the corresponding verified video on the cloud server is URL2; the predicted video duration of 8 seconds, the frame extraction interval of 2 seconds, and the preset similarity of 99% are passed as parameters into the compositFrames() function, which calls the getFrames() function to extract frames from the newly synthesized video and the verified video respectively;
(42) using the frame extraction service of the cloud server, frame screenshots are captured at 0, 2, 4, 6, and 8 seconds according to the predicted duration and the extraction interval; the captured images are stored in a List<InputStream>, and all frame screenshots, extraction time points, and the cloud-server URL addresses of the screenshots are recorded in a VideoFrames class;
(43) the VideoFrames object of the newly synthesized video, the VideoFrames object of the verified video, and the preset similarity threshold obtained from the frame extraction are then passed into the compleFrames() function to compare the video frames.
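The sampling times in step (42) at 0, 2, 4, 6, and 8 seconds follow directly from the predicted duration and the extraction interval. A minimal Java sketch of that arithmetic (the helper name frameTimePoints is ours; the patent does not name this step as a separate function):

```java
import java.util.ArrayList;
import java.util.List;

public class FrameTimes {
    // Capture time points: 0, interval, 2*interval, ... up to and including
    // the predicted duration, matching the 0/2/4/6/8-second example above.
    static List<Integer> frameTimePoints(int durationSec, int intervalSec) {
        List<Integer> points = new ArrayList<>();
        for (int t = 0; t <= durationSec; t += intervalSec) {
            points.add(t);
        }
        return points;
    }

    public static void main(String[] args) {
        // An 8-second video sampled every 2 seconds yields 5 frames.
        System.out.println(frameTimePoints(8, 2)); // [0, 2, 4, 6, 8]
    }
}
```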
(5) Compare the RGB value of each pixel of the frame pictures extracted at each interval, mark differing pixels in red, and save the difference picture to the cloud server. The specific operation is as follows:
(51) take the frame-screenshot lists from the newly synthesized video object and the verified video object, loop over the screenshots in the lists, and pass the two screenshots (expected and actual) at each corresponding time point, together with the preset similarity of 99%, as parameters into the CompareAndMarkDiff() function to compare the two screenshots and mark the difference points;
(52) create three BufferedImage objects, namely the source picture, the expected picture, and the difference picture; preset the maximum per-channel pixel difference for red, green, and blue; and resize the expected picture with the changeImageSize() function to match the source picture size;
(53) loop over the screenshots in the lists; for each pixel, obtain the RGB values of the source picture and the expected picture with the getRGB() function, then call the getRed(), getGreen(), and getBlue() functions to obtain the per-channel values of the source pixel and the expected pixel in the red, green, and blue dimensions, and compute each per-channel pixel difference with the abs() function;
(54) if the pixel difference in any single channel exceeds the preset maximum, the pixel is judged inconsistent: the setRGB() function is called to mark that pixel of the difference picture in red, and the difference count is incremented by 1.
Here, the difference picture is obtained by comparing the verified video and the newly synthesized video frame by frame to find the differing pixels, taking the frame picture of the original video material, and marking the differing pixels on it; the picture carrying these marked difference points is the difference picture.
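Steps (51) to (54) name Java's BufferedImage API (getRGB(), setRGB(), getRed(), getGreen(), getBlue()), so the per-pixel comparison can be sketched directly in Java. This is a minimal sketch, not the patent's implementation: the class name, the method name compareAndMarkDiff (lower-cased from the CompareAndMarkDiff() mentioned above), and the per-channel tolerance of 10 are our assumptions; the patent presets a maximum per-channel difference but does not state its value.

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class PixelDiff {
    // Assumed per-channel tolerance; the patent presets this but leaves the value open.
    static final int MAX_CHANNEL_DIFF = 10;

    // Compares two same-sized images channel by channel, paints differing
    // pixels red on the difference image, copies matching pixels from the
    // source, and returns the number of differing pixels.
    static int compareAndMarkDiff(BufferedImage source, BufferedImage expected, BufferedImage diff) {
        int diffCount = 0;
        for (int y = 0; y < source.getHeight(); y++) {
            for (int x = 0; x < source.getWidth(); x++) {
                Color a = new Color(source.getRGB(x, y));
                Color b = new Color(expected.getRGB(x, y));
                boolean differs =
                        Math.abs(a.getRed() - b.getRed()) > MAX_CHANNEL_DIFF
                     || Math.abs(a.getGreen() - b.getGreen()) > MAX_CHANNEL_DIFF
                     || Math.abs(a.getBlue() - b.getBlue()) > MAX_CHANNEL_DIFF;
                if (differs) {
                    diff.setRGB(x, y, Color.RED.getRGB()); // mark the difference point in red
                    diffCount++;
                } else {
                    diff.setRGB(x, y, source.getRGB(x, y));
                }
            }
        }
        return diffCount;
    }

    public static void main(String[] args) {
        BufferedImage a = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage b = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage d = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        b.setRGB(0, 0, Color.WHITE.getRGB()); // one pixel deliberately different
        System.out.println(compareAndMarkDiff(a, b, d)); // 1
    }
}
```

The sketch assumes the expected picture has already been resized to the source size, as step (52) does with changeImageSize().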
(6) Calculate the matching rate over all pixels of the two pictures; if the matching rate is below the set similarity threshold, the two pictures are judged inconsistent, and if it is above, the two pictures are judged the same. The specific operation is as follows: save the difference picture and calculate the matching rate; multiply the width and height of the source picture to obtain the total pixel count, subtract the difference count from the total pixel count, and divide by the total pixel count to obtain the matching rate of the two pictures; if the matching rate is below the preset similarity, the two pictures are judged inconsistent, otherwise they are judged the same, and the result is set on an ImageCompareRes picture-comparison result object: true for consistent, false for inconsistent.
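The matching-rate arithmetic of step (6) and the threshold check used in steps (6) and (7) reduce to a few lines; a sketch with assumed names (matchRate, framesMatch are ours, and the patent leaves the exact-threshold boundary case unspecified, so >= is our choice):

```java
public class MatchRate {
    // Matching rate = (total pixels - differing pixels) / total pixels, per step (6).
    static double matchRate(int width, int height, int diffCount) {
        int totalPixels = width * height;
        return (totalPixels - diffCount) / (double) totalPixels;
    }

    // Two frames are judged the same when the rate reaches the preset similarity.
    static boolean framesMatch(double rate, double presetSimilarity) {
        return rate >= presetSimilarity;
    }

    public static void main(String[] args) {
        // A 1280x720 frame with 4608 differing pixels: 916992 / 921600 = 0.995.
        double rate = matchRate(1280, 720, 4608);
        System.out.println(rate);                    // 0.995
        System.out.println(framesMatch(rate, 0.99)); // true: passes a 99% threshold
    }
}
```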
(7) If the matching rate of every extracted frame picture is above the set similarity threshold, the two videos are the same, and an Allure test report is output. If the result in the ImageCompareRes object is true, the Allure report prints "verify that the similarity of the video's x-th second frame meets expectations"; if false, it prints "the similarity of the video's x-th second frame is lower than expected".
When the same material resources are used, the method extracts frames from the newly synthesized video and from the verified video, checks the RGB value of each frame pixel by pixel, automatically verifies whether the newly synthesized video meets expectations, and generates an Allure report from all video verification results. The method automatically exports the new video and fetches the verified video from the Alibaba Cloud server for frame extraction and comparison, obtaining an accurate comparison result; the comparison similarity threshold can be set flexibly; differing regions are marked in red, making problems easy to view and analyze; the per-frame similarity, the verified video link, the new video link, and the difference-picture screenshots are all displayed in the Allure report for easy access; and the whole process of video export, video upload, video comparison, result output, and report generation runs automatically, so a single click starts the build and manpower is freed.

Claims (5)

1. An automatic testing method for video synthesis correctness, characterized by comprising the following steps:
(1) uploading the same picture and video material resources used by the verified video to the mobile phone, importing the pictures or video resources in the same order, and creating a template video;
(2) exporting the created template video to the mobile phone album, obtaining the video path, and uploading the video to a cloud server;
(3) splicing together the path of the corresponding verified video on the cloud server from the template video name, and printing the newly synthesized video path and the verified video path into an Allure test report;
(4) extracting frames from the newly synthesized video and the verified video respectively, with a configurable frame extraction interval in seconds;
(5) comparing the RGB value of each pixel of the frame pictures extracted at each interval, marking differing pixels in red, and saving the difference picture to the cloud server;
(6) calculating the matching rate over all pixels of the two pictures; if the matching rate is below the set similarity threshold, the two pictures are judged inconsistent, and if it is above, the two pictures are judged the same;
(7) if the matching rate of every extracted frame picture is above the set similarity threshold, the two videos are the same, and an Allure test report is output.
2. The automatic testing method for video synthesis correctness according to claim 1, characterized in that in step (3), the verified video is a standard video determined to be qualified through manual inspection and stored at an assigned path on the cloud server; the assigned path plus the template video name locates exactly one video, so the assigned path of the corresponding verified video on the cloud server is spliced together from the template video name; the newly synthesized video is the video run out by the test case in each automated test execution: a new template video with the template special effects, synthesized from the template and the original video material through a new package.
3. The automatic testing method for video synthesis correctness according to claim 1 or 2, characterized in that in step (4), the specific operation is as follows:
(41) the link of the newly synthesized video uploaded to the cloud server is URL1, and the link of the corresponding verified video on the cloud server is URL2; the predicted video duration, the frame extraction interval in seconds, and the preset similarity threshold are passed as parameters into the compositFrames() function, which calls the getFrames() function to extract frames from the newly synthesized video and the verified video respectively;
(42) using the frame extraction service of the cloud server, frame screenshots are captured from the video according to the predicted duration and the extraction interval; the captured images are stored in a List<InputStream>, and all frame screenshots, extraction time points, and the cloud-server URL addresses of the screenshots are recorded in a VideoFrames class;
(43) the VideoFrames object of the newly synthesized video, the VideoFrames object of the verified video, and the preset similarity threshold obtained from the frame extraction are then passed into the compleFrames() function to compare the video frames.
4. The automatic testing method for video synthesis correctness according to claim 3, characterized in that in step (5), the specific operation is as follows:
(51) take the frame-screenshot lists from the newly synthesized video object and the verified video object, loop over the screenshots in the lists, and pass the two screenshots at each corresponding time point, together with the preset similarity threshold, as parameters into the CompareAndMarkDiff() function to compare the two screenshots and mark the difference points;
(52) create three BufferedImage objects, namely the source picture, the expected picture, and the difference picture; preset the maximum per-channel pixel difference for red, green, and blue; and resize the expected picture with the changeImageSize() function to match the source picture size;
(53) loop over the screenshots in the lists; for each pixel, obtain the RGB values of the source picture and the expected picture with the getRGB() function, then call the getRed(), getGreen(), and getBlue() functions to obtain the per-channel values of the source pixel and the expected pixel in the red, green, and blue dimensions, and compute each per-channel pixel difference with the abs() function;
(54) if the pixel difference in any single channel exceeds the preset maximum, the pixel is judged inconsistent: the setRGB() function is called to mark that pixel of the difference picture in red, and the difference count is incremented by 1.
5. The automatic testing method for video synthesis correctness according to claim 4, characterized in that in step (6), the specific operation is as follows: save the difference picture and calculate the matching rate; multiply the width and height of the source picture to obtain the total pixel count, subtract the difference count from the total pixel count, and divide by the total pixel count to obtain the matching rate of the two pictures.
CN202010304620.XA 2020-04-17 2020-04-17 Automatic testing method for video synthesis correctness Active CN111432205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010304620.XA CN111432205B (en) 2020-04-17 2020-04-17 Automatic testing method for video synthesis correctness


Publications (2)

Publication Number Publication Date
CN111432205A 2020-07-17
CN111432205B 2021-10-26

Family

ID=71553968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010304620.XA Active CN111432205B (en) 2020-04-17 2020-04-17 Automatic testing method for video synthesis correctness

Country Status (1)

Country Link
CN (1) CN111432205B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131113A (en) * 2020-09-23 2020-12-25 北京达佳互联信息技术有限公司 Method for automatically testing special effect and electronic equipment
CN113077534A (en) * 2021-03-22 2021-07-06 上海哔哩哔哩科技有限公司 Picture synthesis cloud platform and picture synthesis method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004208117A (en) * 2002-12-26 2004-07-22 Nec Engineering Ltd Image quality deterioration detection system when compositing caption
CN108900830A (en) * 2018-05-25 2018-11-27 南京理工大学 Verify the platform that Infrared video image Processing Algorithm realizes accuracy
CN110730381A (en) * 2019-07-12 2020-01-24 北京达佳互联信息技术有限公司 Method, device, terminal and storage medium for synthesizing video based on video template

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004208117A (en) * 2002-12-26 2004-07-22 Nec Engineering Ltd Image quality deterioration detection system when compositing caption
CN108900830A (en) * 2018-05-25 2018-11-27 南京理工大学 Verify the platform that Infrared video image Processing Algorithm realizes accuracy
CN110730381A (en) * 2019-07-12 2020-01-24 北京达佳互联信息技术有限公司 Method, device, terminal and storage medium for synthesizing video based on video template

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131113A (en) * 2020-09-23 2020-12-25 北京达佳互联信息技术有限公司 Method for automatically testing special effect and electronic equipment
CN113077534A (en) * 2021-03-22 2021-07-06 上海哔哩哔哩科技有限公司 Picture synthesis cloud platform and picture synthesis method
CN113077534B (en) * 2021-03-22 2023-11-28 上海哔哩哔哩科技有限公司 Picture synthesis cloud platform and picture synthesis method

Also Published As

Publication number Publication date
CN111432205B (en) 2021-10-26

Similar Documents

Publication Publication Date Title
US20200167612A1 (en) Image processing method and apparatus
CN111432205B (en) Automatic testing method for video synthesis correctness
CN109413417B (en) System and method for detecting interactive television service quality
CN105389809B (en) Display performance testing method, system and device
WO2008092345A1 (en) A method and apparatus for generating test script, a method, apparatus and system for checking test
CN103164443A (en) Method and device of picture merging
CN110879780A (en) Page abnormity detection method and device, electronic equipment and readable storage medium
CN112288716A (en) Steel coil bundling state detection method, system, terminal and medium
WO2023236371A1 (en) Visual analysis method for cable element identification
CN105979152A (en) Smart shooting system
CN106972983B (en) Automatic testing device and method for network interface
CN108230227B (en) Image tampering identification method and device and electronic equipment
CN111736893B (en) Software package version verification method and related device
CN117078152A (en) Image recognition-based quotient zero precision signing method and system for tobacco industry
CN112837640A (en) Screen dynamic picture testing method, system, electronic equipment and storage medium
CN116152609B (en) Distributed model training method, system, device and computer readable medium
CN107783856A (en) A kind of method of testing and system of image processor parameter
CN116596903A (en) Defect identification method, device, electronic equipment and readable storage medium
CN116089277A (en) Neural network operator test and live broadcast application method and device, equipment and medium thereof
CN112614059B (en) Detection method and device of transformer substation, storage medium and processor
CN115456984A (en) High-speed image recognition defect detection system based on two-dimensional code
WO2020192399A1 (en) Method and apparatus for fast analysis of two-dimensional code image
CN112434548B (en) Video labeling method and device
CN113408475A (en) Indication signal recognition method, indication signal recognition apparatus, and computer storage medium
CN111538658A (en) Automatic testing method for interface loading duration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 22nd floor, block a, Huaxing Times Square, 478 Wensan Road, Xihu District, Hangzhou, Zhejiang 310000

Applicant after: Hangzhou Xiaoying Innovation Technology Co.,Ltd.

Address before: 16 / F, HANGGANG Metallurgical Science and technology building, 294 Tianmushan Road, Xihu District, Hangzhou City, Zhejiang Province, 310012

Applicant before: HANGZHOU QUWEI SCIENCE & TECHNOLOGY Co.,Ltd.

GR01 Patent grant