CN115278209A - Camera test system based on intelligent walking robot - Google Patents

Camera test system based on intelligent walking robot

Info

Publication number
CN115278209A
CN115278209A
Authority
CN
China
Prior art keywords
test
target
camera
target test
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210662788.7A
Other languages
Chinese (zh)
Inventor
黄伟 (Huang Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yanding Information Technology Co ltd
Original Assignee
Shanghai Yanding Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yanding Information Technology Co ltd filed Critical Shanghai Yanding Information Technology Co ltd
Priority to CN202210662788.7A priority Critical patent/CN115278209A/en
Publication of CN115278209A publication Critical patent/CN115278209A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for

Abstract

The application aims to provide a camera test system based on an intelligent walking robot. A test engineer only needs to place the camera to be tested in the equipment bin; the entire test is then completed automatically through cooperation between the intelligent walking robot and the analysis device, without manual intervention or manual calibration. Data are acquired automatically and the required information on the shooting parameters of the camera is generated automatically, so that a user can clearly understand the performance of each camera simply by browsing the shooting parameter information. The system also offers excellent repeatability, a stable test environment, high equipment precision, and efficient, reliable data. It can greatly improve the work efficiency of test engineers, save a large amount of working time, free engineers from heavy physical and mental labor, and provide data with a high level of confidence.

Description

Camera test system based on intelligent walking robot
Technical Field
The application relates to the technical field of camera testing, in particular to a camera testing system based on an intelligent walking robot.
Background
With the growing demand of domestic optical laboratories for intelligent camera testing, automatic camera testing in the prior art generally places the cameras at different test points in a single, fixed mode, which causes problems such as inaccurate testing, rigid testing, and complicated operation. The fixed movement mode has many defects: for example, test points with fixed positions must be added to the map manually in advance, and fine adjustment of the smart car's position at a test point is not supported, so if a test point is not well calibrated it cannot be fine-tuned and is inflexible, and its position has to be manually adjusted again, which invisibly increases the testing workload and reduces testing efficiency.
Disclosure of Invention
An object of the present application is to provide a camera test system based on an intelligent walking robot, which solves the problems of inaccurate testing, rigid testing, and complicated operation caused by the single fixed mode in the existing camera testing field, and improves testing efficiency and testing accuracy.
According to an aspect of the present application, there is provided a camera test system based on an intelligent walking robot, wherein the system comprises an intelligent walking robot and an analysis device, the intelligent walking robot comprises a mechanical arm, and the system comprises the following test steps:
Step one, a camera to be tested in an equipment bin is grabbed by a mechanical arm of the intelligent walking robot and moved to a preset initial position;
Step two, the intelligent walking robot determines a target test scene, acquires the position of the target test point where the target test scene is located, and plans a navigation path from the initial position to the position of the target test point;
Step three, the intelligent walking robot carries the camera to be tested and navigates to the target test point based on the navigation path;
Step four, the camera to be tested is opened to shoot the target test scene, the reference point of the shot picture is analyzed, and the camera is aligned to the target test scene through movement of the mechanical arm;
Step five, after the alignment is completed, a test video of the target test scene is recorded by the camera to be tested and uploaded to the analysis device;
Step six, the analysis device calls a target video analysis algorithm corresponding to the target test scene and performs video analysis on the test video to obtain a data set related to the shooting parameters of the camera to be tested;
Step seven, the analysis device calls a target keyword set corresponding to the target test scene and performs keyword extraction on the data set according to the target keyword set to obtain shooting parameter information of the camera to be tested, wherein the target keyword set comprises at least one target keyword, and the shooting parameter information comprises at least one shooting parameter and a parameter value corresponding to each shooting parameter.
Further, in the above system, the analyzing device is further configured to:
calling the evaluation weight of each shooting parameter corresponding to the target test scene;
and evaluating the shooting performance of the camera to be tested according to the shooting parameter information of the camera to be tested and the evaluation weight of each shooting parameter to obtain the shooting performance evaluation value of the camera to be tested in the target test scene.
Further, in the above system, the analyzing device is further configured to:
acquiring test requirements of a user on the camera to be tested in the target test scene, wherein the test requirements comprise attention indexes of the user on different shooting parameters of the camera to be tested in the target test scene;
and setting the evaluation weight of each shooting parameter in the target test scene according to the test requirement.
Further, in the above system, the intelligent walking robot is further configured to:
storing test points and positions thereof of different test scenes;
the method for determining the target test scene and acquiring the position of the target test point of the target test scene by the intelligent walking robot comprises the following steps:
and the intelligent walking robot determines a target test scene according to the acquired real-time scene test requirements, and inquires and acquires the position of the target test point of the target test scene from the test points and the positions of the different test scenes.
Further, in the above system, the analyzing device is further configured to:
storing video analysis algorithms corresponding to different test scenes;
before the analysis device invokes the target video analysis algorithm corresponding to the target test scene, the analysis device is further configured to:
and according to the target test scene, inquiring from the video analysis algorithms corresponding to the different test scenes, and matching to the target video analysis algorithm corresponding to the target test scene.
Further, in the above system, the analyzing device is further configured to:
storing keyword sets corresponding to different test scenes, wherein the keyword sets comprise at least one keyword;
before the analysis device invokes the target keyword set corresponding to the target test scenario, the analysis device is further configured to:
and according to the target test scene, inquiring from the keyword sets corresponding to the different test scenes, and matching to a target keyword set corresponding to the target test scene.
Compared with the prior art, the camera test system based on an intelligent walking robot provided in the embodiments of the present application comprises an intelligent walking robot and an analysis device, the intelligent walking robot comprising a mechanical arm, and the system carries out the following test steps: step one, a camera to be tested in an equipment bin is grabbed by the mechanical arm of the intelligent walking robot and moved to a preset initial position; step two, the intelligent walking robot determines a target test scene, acquires the position of the target test point where the target test scene is located, and plans a navigation path from the initial position to the position of the target test point; step three, the intelligent walking robot carries the camera to be tested and navigates to the target test point based on the navigation path; step four, the camera to be tested is opened to shoot the target test scene, the reference point of the shot picture is analyzed, and the camera is aligned to the target test scene through movement of the mechanical arm; step five, after the alignment is completed, a test video of the target test scene is recorded by the camera to be tested and uploaded to the analysis device; step six, the analysis device calls a target video analysis algorithm corresponding to the target test scene and performs video analysis on the test video to obtain a data set related to the shooting parameters of the camera to be tested; and step seven, the analysis device calls a target keyword set corresponding to the target test scene and performs keyword extraction on the data set according to the target keyword set to obtain shooting parameter information of the camera to be tested, wherein the target keyword set comprises at least one target keyword and the shooting parameter information comprises at least one shooting parameter and a parameter value corresponding to each shooting parameter. In a practical application scene, a test engineer only needs to place the camera to be tested in the equipment bin and is then free to do other work; the entire test is completed automatically through cooperation between the intelligent walking robot and the analysis device, without manual intervention or manual calibration. Data are acquired automatically and the required information related to the shooting parameters of the camera is generated automatically, so that a user can clearly understand the performance of each camera simply by browsing the shooting parameter information. Moreover, the repeatability is excellent, the test environment is stable, the equipment precision is high, and the data are efficient and reliable, which can greatly improve the work efficiency of test engineers, save a large amount of working time, free engineers from heavy physical and mental labor, and provide data with a high level of confidence.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a test flow diagram of a smart walking robot based camera testing system according to one aspect of the present application;
the same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
One aspect of the present application provides a camera test system based on an intelligent walking robot, wherein the system comprises an intelligent walking robot and an analysis device. The intelligent walking robot includes, but is not limited to, a walking robot capable of acquiring pictures, videos, text, and the like, for example a smart car. The intelligent walking robot comprises a mechanical arm, and the system comprises the following test steps:
Step one, a camera to be tested in an equipment bin is grabbed by a mechanical arm of the intelligent walking robot and moved to a preset initial position;
Step two, the intelligent walking robot determines a target test scene, acquires the position of the target test point of the target test scene, and plans a navigation path from the initial position to the position of the target test point;
Step three, the intelligent walking robot carries the camera to be tested and navigates to the target test point based on the navigation path;
Step four, the camera to be tested is opened to shoot the target test scene, the reference point of the shot picture is analyzed, and the camera is aligned to the target test scene through movement of the mechanical arm;
Step five, after the alignment is completed, a test video of the target test scene is recorded by the camera to be tested and uploaded to the analysis device; herein, the analysis device includes, but is not limited to, a computer, a server, and an analysis back end.
Step six, the analysis device calls a target video analysis algorithm corresponding to the target test scene and performs video analysis on the test video to obtain a data set related to the shooting parameters of the camera to be tested;
Step seven, the analysis device calls a target keyword set corresponding to the target test scene and performs keyword extraction on the data set according to the target keyword set to obtain shooting parameter information of the camera to be tested, wherein the target keyword set comprises at least one target keyword, and the shooting parameter information comprises at least one shooting parameter and a parameter value corresponding to each shooting parameter.
Through steps one to seven, in a practical application scene a test engineer only needs to place the camera to be tested in the equipment bin and can then attend to other work. The entire test is completed automatically through cooperation between the intelligent walking robot and the analysis device, without manual intervention or manual calibration. Data are acquired automatically and the required information related to the shooting parameters of the camera is generated automatically, so that a user can clearly understand the performance of each camera simply by browsing the shooting parameter information. The repeatability is excellent, the test environment is stable, the equipment precision is high, and the data are efficient and reliable, which can greatly improve the work efficiency of test engineers, save a large amount of working time, free engineers from heavy physical and mental labor, and provide data with a high level of confidence.
As shown in FIG. 1, in a practical application scene one or more cameras to be tested are placed in the equipment bin. When a camera in the equipment bin needs to be tested, the mechanical arm on the intelligent walking robot automatically grabs the camera to be tested from the equipment bin and moves it to a preset initial position, so that every camera to be tested starts from the same initial position. The intelligent walking robot then determines the target test scene to be tested and, according to the position of the target test point where the target test scene is located, moves from the initial position to the position of the target test point by free navigation. Next, the intelligent walking robot opens the camera to be tested held by the mechanical arm to shoot the target test scene, analyzes the reference point (Mark point) of the shot picture, and automatically aligns to the target test scene through movement of the mechanical arm, thereby achieving shooting alignment with the target test scene. After the scene alignment is completed, a test video of the target test scene is recorded by the camera to be tested, and the recorded test video is uploaded to the analysis device. The analysis device then calls the target video analysis algorithm API corresponding to the target test scene to analyze the test video, obtaining a data set containing shooting parameters of the camera to be tested such as noise, color, exposure, white balance, and time. The analysis device then calls the target keyword set corresponding to the target test scene, extracts the key data from the data set according to the keywords in the target keyword set, and generates the shooting parameter information of the camera to be tested, where the shooting parameter information includes one or more shooting parameters and their corresponding parameter values; the shooting parameters include, but are not limited to, camera noise, color, exposure, white balance, and time, and the shooting parameter information can be expressed as a table, a bar chart, a pie chart, a report, or the like to meet the needs of users in different viewing scenes. By replacing the test scene and/or the camera to be tested and repeating all of the above steps, shooting parameter information of the same camera to be tested in different test scenes and/or of different cameras to be tested in the same test scene can be obtained, which satisfies the test requirements of the same camera under different test scenes as well as of different cameras under the same test scene, makes the test process more flexible and free, and can meet the different test requirements of users in practical application scenes.
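As a rough illustration of how the flow above could be orchestrated in software, the following is a minimal Python sketch. All names (run_camera_test, robot, analysis_device, the scene registries, and every method called on them) are illustrative assumptions made for this description, not interfaces disclosed by the application.

from dataclasses import dataclass


@dataclass
class TestResult:
    scene: str
    camera_id: str
    shooting_parameters: dict  # e.g. {"noise": ..., "exposure": ..., "white_balance": ...}


def run_camera_test(robot, analysis_device, camera_id, scene):
    # Step one: grab the camera from the equipment bin and move it to the
    # preset initial position (hypothetical robot-arm interface).
    robot.arm.grab_from_bin(camera_id)
    robot.arm.move_to(robot.initial_position)
    # Step two: look up the target test point for the scene and plan a path.
    target_point = robot.test_points[scene]
    path = robot.plan_path(robot.initial_position, target_point)
    # Step three: navigate to the target test point carrying the camera.
    robot.navigate(path)
    # Step four: shoot the scene, locate the Mark (reference) point in the
    # picture, and fine-tune the alignment with the mechanical arm.
    picture = robot.shoot(camera_id)
    offset = robot.mark_point_offset(picture, scene)
    robot.arm.adjust(offset)
    # Step five: record a test video and upload it to the analysis device.
    video = robot.record_video(camera_id, scene)
    analysis_device.upload(video)
    # Steps six and seven: scene-specific video analysis, then keyword extraction.
    data_set = analysis_device.video_algorithms[scene](video)
    keywords = analysis_device.keyword_sets[scene]
    parameters = {key: data_set[key] for key in keywords if key in data_set}
    return TestResult(scene=scene, camera_id=camera_id, shooting_parameters=parameters)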
Following the above-described embodiments of the present application, the analysis device is further configured to:
calling the evaluation weight of each shooting parameter corresponding to the target test scene;
and evaluating the shooting performance of the camera to be tested according to the shooting parameter information of the camera to be tested and the evaluation weight of each shooting parameter to obtain the shooting performance evaluation value of the camera to be tested in the target test scene.
For example, in different test scenarios, the evaluation weight of each shooting parameter of the camera to be tested is different, and therefore, in an actual application scenario, after a target test scenario is determined, in order to determine the shooting performance of the camera to be tested in the target test scenario, the evaluation weight of each shooting parameter corresponding to the target test scenario needs to be called first; and then according to the shooting parameter information of the camera to be tested and the evaluation weight of each shooting parameter in the target test scene, evaluating the shooting performance of the camera to be tested in the target test scene to obtain the shooting performance evaluation value of the camera to be tested in the target test scene, and realizing the directional evaluation of the shooting performance of the camera to be tested in the target test scene.
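The following is a minimal sketch of that weighted evaluation, assuming the extracted parameter values have already been normalised to comparable scores in [0, 1]; the weight table, parameter names, and figures below are illustrative, not values taken from the application.

def evaluate_shooting_performance(parameter_info, evaluation_weights):
    """Weighted sum of normalised shooting parameter scores for one test scene."""
    return sum(evaluation_weights.get(parameter, 0.0) * value
               for parameter, value in parameter_info.items())


# Example: a hypothetical low-light scene that weights noise most heavily.
low_light_weights = {"noise": 0.4, "exposure": 0.3, "white_balance": 0.2, "color": 0.1}
camera_scores = {"noise": 0.7, "exposure": 0.9, "white_balance": 0.8, "color": 0.85}
print(evaluate_shooting_performance(camera_scores, low_light_weights))  # ≈ 0.795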
Following the above-described embodiments of the present application, the analysis device is further configured to:
acquiring test requirements of a user on the camera to be tested in the target test scene, wherein the test requirements comprise attention indexes of the user on different shooting parameters of the camera to be tested in the target test scene;
and setting the evaluation weight of each shooting parameter in the target test scene according to the test requirement.
For example, in order to meet the evaluation requirements of each shooting parameter in different test scenes, before the analysis device performs a test on an actual application scene, the analysis device further needs to acquire the test requirements of the user on each camera to be tested in the target test scene, where the test requirements include the attention indexes of the user on the different shooting parameters of the camera to be tested in the target test scene, and set the evaluation weight of each shooting parameter in the target test scene according to the test requirements, so that the evaluation weights can be calculated for the different shooting parameters during a subsequent test on the performance of the camera in the target test scene, thereby improving the accuracy of the test on the performance of the camera to be tested in the target test scene.
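A sketch of how the evaluation weights could be derived from the user's test requirement, assuming the requirement expresses the attention index of each shooting parameter as a non-negative number; the function name and example figures are illustrative.

def weights_from_attention(attention_indexes):
    """Normalise per-parameter attention indexes into evaluation weights that sum to 1."""
    total = sum(attention_indexes.values())
    if total <= 0:
        raise ValueError("the test requirement must attend to at least one shooting parameter")
    return {parameter: index / total for parameter, index in attention_indexes.items()}


# A user who mainly cares about noise performance in the target test scene:
print(weights_from_attention({"noise": 5, "exposure": 3, "color": 2}))
# {'noise': 0.5, 'exposure': 0.3, 'color': 0.2}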
Next, in order to ensure that fixed-point testing can be performed for different test scenes, the intelligent walking robot further stores the test points where the different test scenes are located and their positions, so that the robot can autonomously position itself, navigate from the initial position to the position of the target test point where the target test scene is located, and perform the shooting test corresponding to that scene. In a practical application scene of the present application, when the intelligent walking robot determines the target test scene and obtains the position of the target test point where it is located, the robot determines the target test scene according to the acquired real-time scene test requirement, and queries the stored test points and positions of the different test scenes to obtain the target test point of the target test scene and its position. Here, the real-time scene test requirement may be actively input by the user of the test, or may be transmitted by a third-party device.
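A sketch of the stored test points and the lookup driven by a real-time scene test requirement, assuming each test point is kept as a named map pose; the scene names and coordinates are made up for illustration.

TEST_POINTS = {
    "low_light_lab": {"x": 3.2, "y": 1.5, "heading_deg": 90.0},
    "daylight_chart_wall": {"x": 7.8, "y": 4.1, "heading_deg": 180.0},
}


def target_test_point(scene_requirement):
    """Resolve the real-time scene test requirement to the stored target test point."""
    scene = scene_requirement["scene"]
    if scene not in TEST_POINTS:
        raise KeyError(f"no test point stored for scene '{scene}'")
    return TEST_POINTS[scene]


print(target_test_point({"scene": "low_light_lab"}))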
Next, in the embodiment of the present application, in order to implement directional analysis on videos shot by cameras in different test scenes, the analysis device further stores video analysis algorithms corresponding to the different test scenes; in a subsequent practical application scene, before the analysis equipment calls a target video analysis algorithm corresponding to the target test scene, the analysis equipment queries from the video analysis algorithms corresponding to different test scenes according to the target test scene and matches the target video analysis algorithm corresponding to the target test scene, so that the test video shot by the camera to be tested in the target test scene is directionally analyzed through the target video analysis algorithm corresponding to the target test scene, and more accurate shooting parameter information is obtained.
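A sketch of matching the target video analysis algorithm to the target test scene, assuming each stored algorithm is a callable that turns a test video into a data set (a dict of measured values); the registry, function names, and returned figures are illustrative.

def analyse_low_light_video(video_path):
    # Placeholder: a real algorithm would measure noise, exposure, etc. from the video.
    return {"noise": 0.7, "exposure": 0.9, "white_balance": 0.8}


VIDEO_ANALYSIS_ALGORITHMS = {
    "low_light_lab": analyse_low_light_video,
}


def analyse_test_video(scene, video_path):
    """Query the stored algorithms by scene and run the matched one on the test video."""
    algorithm = VIDEO_ANALYSIS_ALGORITHMS.get(scene)
    if algorithm is None:
        raise LookupError(f"no video analysis algorithm stored for scene '{scene}'")
    return algorithm(video_path)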
Next, in the above embodiments of the present application, in order to facilitate directional keyword extraction from the data set related to the shooting parameters of the camera to be tested in different test scenes, the analysis device further stores the keyword sets corresponding to the different test scenes, where each keyword set comprises at least one keyword. In a subsequent practical application scene, before the analysis device calls the target keyword set corresponding to the target test scene, the analysis device queries the keyword sets corresponding to the different test scenes according to the target test scene and matches the target keyword set corresponding to the target test scene, so that keyword extraction is performed on the data set through that target keyword set to obtain the shooting parameter information of the camera to be tested in the target test scene, thereby improving the accuracy of that shooting parameter information.
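A sketch of the keyword-set lookup and extraction, assuming the data set produced by video analysis is a flat dict keyed by parameter name; the keyword set and values are illustrative.

KEYWORD_SETS = {
    "low_light_lab": {"noise", "exposure", "white_balance"},
}


def extract_shooting_parameters(scene, data_set):
    """Keep only the entries of the data set named by the scene's keyword set."""
    keywords = KEYWORD_SETS.get(scene, set())
    return {key: value for key, value in data_set.items() if key in keywords}


data_set = {"noise": 0.7, "exposure": 0.9, "color": 0.85, "frame_rate": 30}
print(extract_shooting_parameters("low_light_lab", data_set))
# {'noise': 0.7, 'exposure': 0.9}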
Next, in all the embodiments of the present application, the shooting parameter information corresponding to different cameras to be tested in the same test scene may be acquired. The shooting parameter information is preferably expressed in the form of a report table, and the report-table shooting parameter information of the different cameras to be tested is subjected to weighted calculation over the different shooting parameters to obtain the shooting performance evaluation value of each camera to be tested; the higher the shooting performance evaluation value, the better the shooting performance of that camera in the same test scene, so that a user can identify and select among the different cameras to be tested according to the differences in their shooting performance evaluation values.
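A sketch of comparing several cameras tested in the same scene by their shooting performance evaluation values, reusing the weighted-sum idea above; the camera names, weights, and scores are illustrative.

def rank_cameras(parameter_info_by_camera, evaluation_weights):
    """Sort cameras by weighted evaluation value, best shooting performance first."""
    scores = {
        camera: sum(evaluation_weights.get(parameter, 0.0) * value
                    for parameter, value in parameters.items())
        for camera, parameters in parameter_info_by_camera.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)


results = {
    "camera_A": {"noise": 0.7, "exposure": 0.9},
    "camera_B": {"noise": 0.8, "exposure": 0.6},
}
print(rank_cameras(results, {"noise": 0.6, "exposure": 0.4}))
# approximately [('camera_A', 0.78), ('camera_B', 0.72)]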
The camera test system based on an intelligent walking robot provided by the above embodiments of the present application not only solves the problems of inaccurate testing, rigid testing, and complicated operation caused by the single fixed mode in the existing camera testing field, but also realizes free navigation by the intelligent walking robot to the test point corresponding to each test scene, eliminates the need for test engineers to add test points manually, and supports fine adjustment of the intelligent walking robot's position at the test point, thereby improving test accuracy and testing efficiency.
To sum up, the camera test system based on an intelligent walking robot provided in the embodiments of the present application comprises an intelligent walking robot and an analysis device, the intelligent walking robot comprising a mechanical arm, and the system carries out the following test steps: step one, a camera to be tested in an equipment bin is grabbed by the mechanical arm of the intelligent walking robot and moved to a preset initial position; step two, the intelligent walking robot determines a target test scene, acquires the position of the target test point where the target test scene is located, and plans a navigation path from the initial position to the position of the target test point; step three, the intelligent walking robot carries the camera to be tested and navigates to the target test point based on the navigation path; step four, the camera to be tested is opened to shoot the target test scene, the reference point of the shot picture is analyzed, and the camera is aligned to the target test scene through movement of the mechanical arm; step five, after the alignment is completed, a test video of the target test scene is recorded by the camera to be tested and uploaded to the analysis device; step six, the analysis device calls a target video analysis algorithm corresponding to the target test scene and performs video analysis on the test video to obtain a data set related to the shooting parameters of the camera to be tested; and step seven, the analysis device calls a target keyword set corresponding to the target test scene and performs keyword extraction on the data set according to the target keyword set to obtain shooting parameter information of the camera to be tested, wherein the target keyword set comprises at least one target keyword and the shooting parameter information comprises at least one shooting parameter and a parameter value corresponding to each shooting parameter. In a practical application scene, a test engineer only needs to place the camera to be tested in the equipment bin; the entire test is completed automatically through cooperation between the intelligent walking robot and the analysis device, without manual intervention or manual calibration. Data are acquired automatically and the required information related to the shooting parameters of the camera is generated automatically, so that a user can clearly understand the performance of each camera simply by browsing the shooting parameter information. Moreover, the repeatability is excellent, the test environment is stable, the equipment precision is high, and the data are efficient and reliable, which can greatly improve the work efficiency of test engineers, save a large amount of working time, free engineers from heavy physical and mental labor, and provide data with a high level of confidence.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (6)

1. A camera test system based on an intelligent walking robot, wherein the system comprises an intelligent walking robot and an analysis device, the intelligent walking robot comprises a mechanical arm, and the system comprises the following test steps:
step one, a camera to be tested in an equipment bin is grabbed by a mechanical arm of the intelligent walking robot and moved to a preset initial position;
step two, the intelligent walking robot determines a target test scene, acquires the position of the target test point where the target test scene is located, and plans a navigation path from the initial position to the position of the target test point;
step three, the intelligent walking robot carries the camera to be tested and navigates to the target test point based on the navigation path;
step four, the camera to be tested is opened to shoot the target test scene, the reference point of the shot picture is analyzed, and the camera is aligned to the target test scene through movement of the mechanical arm;
step five, after the alignment is completed, a test video of the target test scene is recorded by the camera to be tested and uploaded to the analysis device;
step six, the analysis device calls a target video analysis algorithm corresponding to the target test scene and performs video analysis on the test video to obtain a data set related to the shooting parameters of the camera to be tested;
and step seven, the analysis device calls a target keyword set corresponding to the target test scene and performs keyword extraction on the data set according to the target keyword set to obtain shooting parameter information of the camera to be tested, wherein the target keyword set comprises at least one target keyword, and the shooting parameter information comprises at least one shooting parameter and a parameter value corresponding to each shooting parameter.
2. The system of claim 1, wherein the analysis device is further to:
calling the evaluation weight of each shooting parameter corresponding to the target test scene;
and evaluating the shooting performance of the camera to be tested according to the shooting parameter information of the camera to be tested and the evaluation weight of each shooting parameter to obtain the shooting performance evaluation value of the camera to be tested in the target test scene.
3. The system of claim 2, wherein the analysis device is further to:
acquiring test requirements of a user on the camera to be tested in the target test scene, wherein the test requirements comprise attention indexes of the user on different shooting parameters of the camera to be tested in the target test scene;
and setting the evaluation weight of each shooting parameter in the target test scene according to the test requirement.
4. The system of claim 1, wherein the intelligent walking robot is further configured to:
storing test points and positions thereof of different test scenes;
the method for determining the target test scene and acquiring the position of the target test point of the target test scene by the intelligent walking robot comprises the following steps:
and the intelligent walking robot determines a target test scene according to the acquired real-time scene test requirements, and inquires and acquires the position of the target test point of the target test scene from the test points and the positions of the different test scenes.
5. The system of claim 1, wherein the analysis device is further to:
storing video analysis algorithms corresponding to different test scenes;
before the analysis device invokes the target video analysis algorithm corresponding to the target test scene, the analysis device is further configured to:
and according to the target test scene, inquiring from the video analysis algorithms corresponding to the different test scenes, and matching to the target video analysis algorithm corresponding to the target test scene.
6. The system of claim 1, wherein the analysis device is further to:
storing keyword sets corresponding to different test scenes, wherein the keyword sets comprise at least one keyword;
before the analysis device invokes the target keyword set corresponding to the target test scenario, the analysis device is further configured to:
and according to the target test scene, inquiring from the keyword sets corresponding to the different test scenes, and matching to a target keyword set corresponding to the target test scene.
CN202210662788.7A 2022-06-13 2022-06-13 Camera test system based on intelligent walking robot Pending CN115278209A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210662788.7A CN115278209A (en) 2022-06-13 2022-06-13 Camera test system based on intelligent walking robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210662788.7A CN115278209A (en) 2022-06-13 2022-06-13 Camera test system based on intelligent walking robot

Publications (1)

Publication Number Publication Date
CN115278209A true CN115278209A (en) 2022-11-01

Family

ID=83759054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210662788.7A Pending CN115278209A (en) 2022-06-13 2022-06-13 Camera test system based on intelligent walking robot

Country Status (1)

Country Link
CN (1) CN115278209A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116886892A (en) * 2023-09-05 2023-10-13 功道(深圳)科技实业有限公司 Access control management method based on multi-source data
CN116886892B (en) * 2023-09-05 2023-12-29 功道(深圳)科技实业有限公司 Access control management method based on multi-source data

Similar Documents

Publication Publication Date Title
CN109977813B (en) Inspection robot target positioning method based on deep learning framework
CN111007661B (en) Microscopic image automatic focusing method and device based on deep learning
WO2017149869A1 (en) Information processing device, method, program, and multi-camera system
CN107687841A (en) A kind of distance-finding method and device
CN113382155B (en) Automatic focusing method, device, equipment and storage medium
CN106934790B (en) A kind of evaluation method of image definition, the method focused automatically and related device
CN110033481A (en) Method and apparatus for carrying out image procossing
US7627153B2 (en) Repositioning inaccuracies in an automated imaging system
CN112734858B (en) Binocular calibration precision online detection method and device
CN115278209A (en) Camera test system based on intelligent walking robot
CN109146880A (en) A kind of electric device maintenance method based on deep learning
CN108055532A (en) Automate the method and apparatus of matching test card
CN110111341B (en) Image foreground obtaining method, device and equipment
CN111970500A (en) Automatic distance step calibration method and system for projection equipment
CN109614512B (en) Deep learning-based power equipment retrieval method
CN113888583A (en) Real-time judgment method and device for visual tracking accuracy
CN109345560B (en) Motion tracking precision testing method and device of augmented reality equipment
CN108781280B (en) Test method, test device and terminal
CN112304411A (en) Weighing system with object recognition and weighing method
CN116980757A (en) Quick focusing method, focusing map updating method, device and storage medium
CN115134586A (en) Anti-shake test system of camera
JPH10254903A (en) Image retrieval method and device therefor
CN108769670A (en) A kind of method and system carrying out repeatability precision test to kinematic system with camera
CN105637344A (en) Image processing device, program, storage medium, and image processing method
CN115641499B (en) Photographing real-time positioning method, device and storage medium based on street view feature library

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination