CN115629831A - Data acquisition method, device, equipment and storage medium for equipment interface - Google Patents


Info

Publication number
CN115629831A
CN115629831A
Authority
CN
China
Prior art keywords
interface
image
main interface
sub
target sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211110854.6A
Other languages
Chinese (zh)
Inventor
单超炳
龚小龙
郑聪
麻志毅
陈曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Original Assignee
Advanced Institute of Information Technology AIIT of Peking University
Hangzhou Weiming Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Institute of Information Technology AIIT of Peking University, Hangzhou Weiming Information Technology Co Ltd filed Critical Advanced Institute of Information Technology AIIT of Peking University
Priority to CN202211110854.6A priority Critical patent/CN115629831A/en
Publication of CN115629831A publication Critical patent/CN115629831A/en
Pending legal-status Critical Current


Classifications

    • G06F 9/451: Execution arrangements for user interfaces (arrangements for program control, G06F 9/00)
    • G06F 3/04817: GUI interaction techniques using icons (input arrangements for user-computer interaction, G06F 3/01)
    • G06V 10/82: Image or video recognition using neural networks (pattern recognition or machine learning, G06V 10/70)
    • G06V 30/148: Segmentation of character regions (character recognition, image acquisition, G06V 30/14)
    • G06V 30/19173: Classification techniques in recognition-system design (character recognition using electronic means, G06V 30/19)


Abstract

The application discloses a data acquisition method, apparatus, device and storage medium for an equipment interface. The method comprises: identifying the required main interface with a pre-trained main interface discriminator, and automatically presenting it on the equipment to be acquired; determining an image template to be acquired, locating a target sub-interface similar to the image template on the required main interface, and automatically restoring any occluded portion of the target sub-interface or any portion extending beyond the screen boundary; and intelligently segmenting the target sub-interface into single-character images, which are input into a pre-trained character classification discriminator to obtain the recognized data. The data acquisition method provided by the application can automatically identify and present the target interface to be acquired, segment the data intelligently, locate every character effectively, and recognize each segmented character, greatly improving recognition accuracy and efficiency.

Description

Data acquisition method, device, equipment and storage medium for equipment interface
Technical Field
The present invention relates to the field of data acquisition technologies, and in particular, to a method, an apparatus, a device, and a storage medium for acquiring data on an equipment interface.
Background
Under the combined influence of technological progress, changing business models, consumption upgrades, rising labor costs and other factors, many industry patterns of the mass-production era have changed; for most industries, digital transformation is no longer optional but necessary. In the process of digital transformation, the acquisition of multi-source, heterogeneous (structured, unstructured, etc.) multi-modal data is the starting point. Production equipment in industry, government-affairs systems in government departments, instruments in medical institutions, market-analysis software in financial institutions, shopping systems in retail and the like all hold large amounts of multi-source, heterogeneous, multi-modal (text, picture, video, voice, etc.) data. In these industries, however, data is often difficult to obtain from equipment or software systems. For example, in industrial production a large amount of equipment is procured domestically or internationally, and the purchased equipment frequently does not expose a data interface to the enterprise. Meanwhile, the supporting software system may be developed by a third party and, because of problems on the third party's side, often goes without updates and maintenance, so that data acquisition from equipment and software systems depends heavily on outside parties. These problems greatly delay the digitization of many industries.
In many industries, lacking access to the core operating data of their equipment, some organizations still acquire equipment and system data by reading the screen, copying by hand and memorizing. These methods have the following disadvantages: they consume manpower, since copying data manually requires many workers and incurs labor cost; the process is complex, since complicated equipment interfaces (interface classification, overlapping interfaces, interface displacement, etc.) make acquisition difficult; errors occur easily, since efficiency depends on manual labor and transcription mistakes lead to inaccurate results that affect the business; and the response is slow, since equipment and system data cannot be obtained in real time, delaying decisions.
Disclosure of Invention
The embodiment of the application provides a data acquisition method, a data acquisition device, equipment and a storage medium for an equipment interface. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present application provides a method for acquiring data of an equipment interface, including:
identifying a required main interface according to a pre-trained main interface discriminator, and automatically presenting the required main interface on equipment to be acquired;
determining an image template to be acquired, locating a target sub-interface similar to the image template on the required main interface, and automatically restoring the occluded portion of the target sub-interface or the portion extending beyond the screen boundary;
and intelligently segmenting the target sub-interface to obtain a single character image, and inputting the single character image into a pre-trained character classification discriminator to obtain the identified data to be acquired.
In some optional embodiments, identifying the required main interface according to a pre-trained main interface discriminator, and automatically presenting the required main interface on the device to be acquired, includes:
capturing the main interface of the device to be acquired;
inputting the captured main interface into the pre-trained main interface discriminator, and judging whether it is the required main interface;
when the captured main interface is the required main interface, automatically presenting it;
when the captured main interface is a desktop interface, automatically finding and presenting the required main interface through a keyboard and mouse controller;
and when the captured main interface is another software interface, switching to the desktop interface through the keyboard and mouse controller, and then automatically finding and presenting the required main interface through the keyboard and mouse controller.
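The three-way branch above can be sketched as a small dispatcher. The following Python sketch is illustrative only: the label names and the controller stand-in are hypothetical, not part of the embodiment.

```python
# Illustrative sketch of the main-interface branching flow; the labels and
# the controller actions are hypothetical stand-ins, not the patent's API.

REQUIRED, DESKTOP, OTHER = "required", "desktop", "other"

def present_required_interface(label, controller):
    """Map the discriminator's label for the captured screen to the
    keyboard-and-mouse controller actions that bring the required main
    interface to the front."""
    if label == REQUIRED:
        return [controller("bring_to_front")]
    if label == DESKTOP:
        # Desktop showing: open the required software (or browser) directly.
        return [controller("open_required_software")]
    if label == OTHER:
        # Another program in front: switch to the desktop first, then open.
        return [controller("switch_to_desktop"),
                controller("open_required_software")]
    raise ValueError(f"unknown interface label: {label!r}")

# Example: a recording controller simply echoes each requested action,
# so the returned list shows the action sequence for each case.
record = lambda action: action
```

In a real deployment the controller would drive the keyboard and mouse; here it merely records the requested actions so the flow can be inspected.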
In some optional embodiments, before identifying the required main interface according to the pre-trained main interface discriminator, the method further comprises:
capturing a preset number of main interfaces, where the preset number of main interfaces include the required main interface, desktop interfaces and other software interfaces;
dividing the preset number of main interfaces into a training set, a test set and a verification set;
and training the main interface discriminator on the training set, the test set, the verification set and a classification neural network model to obtain the trained main interface discriminator.
In some optional embodiments, determining an image template to be acquired, positioning a target sub-interface similar to the image template on the required main interface, and automatically restoring an occluded part or a part beyond a screen boundary of the target sub-interface, includes:
cropping the image template to be acquired from the required main interface, and obtaining the length and width of the image template;
obtaining the similarity between each position in the required main interface and the image template according to the required main interface, the image template and a correlation coefficient matching algorithm;
taking the position coordinate with the maximum similarity as the initial coordinate and, starting from it, cropping from the required main interface a sub-image whose length and width equal those of the image template;
calculating the HSV (hue, saturation, value) matching confidence of the sub-image against the image template, and if the HSV matching confidence is greater than or equal to a preset threshold, taking the sub-image as the target sub-interface;
if the HSV matching confidence is smaller than the preset threshold, the sub-image is occluded or extends beyond the screen boundary; the occluded portion of the sub-image or the portion beyond the screen boundary is automatically restored, and if the HSV matching confidence of the restored sub-image is greater than or equal to the preset threshold, the restored sub-image is the target sub-interface.
In some optional embodiments, obtaining the similarity between each position in the required main interface and the image template according to the required main interface, the image template and the correlation coefficient matching algorithm includes:
carrying out graying processing on the required main interface and the image template to respectively obtain a gray matrix;
and inputting the initial point coordinate of the required main interface, the initial point coordinate of the image template, the gray matrix of the required main interface and the gray matrix of the image template into a correlation coefficient matching algorithm to obtain the similarity between each position in the required main interface and the image template.
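The correlation-coefficient matching step can be sketched in NumPy as a zero-mean normalized cross-correlation over the grayscale matrices (the same similarity that OpenCV's `cv2.matchTemplate` computes with `TM_CCOEFF_NORMED`, which would be the fast choice in practice). The function names below are illustrative, not the patent's.

```python
import numpy as np

def correlation_map(gray_main, gray_tmpl):
    """Slide the grayscale template over the grayscale main interface and
    return, for every valid top-left position, the zero-mean normalized
    correlation coefficient (a similarity in [-1, 1])."""
    H, W = gray_main.shape
    h, w = gray_tmpl.shape
    t = gray_tmpl - gray_tmpl.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.full((H - h + 1, W - w + 1), -1.0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = gray_main[y:y + h, x:x + w]
            wz = win - win.mean()
            denom = np.sqrt((wz * wz).sum()) * t_norm
            if denom > 0:
                out[y, x] = (wz * t).sum() / denom
    return out

def best_match(gray_main, gray_tmpl):
    """Return the (row, col) start coordinate with the highest similarity,
    i.e. the initial coordinate used to crop the sub-image."""
    sim = correlation_map(gray_main, gray_tmpl)
    return np.unravel_index(np.argmax(sim), sim.shape)
```

Because the coefficient is zero-mean normalized, it is insensitive to uniform brightness shifts between the live interface and the stored template, which suits interfaces whose rendering varies slightly over time.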
In some optional embodiments, automatically restoring the occluded portion of the sub-image or the portion beyond the screen boundary, where the restored sub-image is the target sub-interface if its HSV matching confidence is greater than or equal to a preset threshold, includes:
clicking the unoccluded portion of the sub-image with a keyboard and mouse controller to obtain the restored sub-image;
calculating HSV (hue, saturation and value) matching confidence of the restored sub-image and the image template, and if the HSV matching confidence is greater than or equal to a preset threshold value, taking the restored sub-image as a target sub-interface;
if the HSV matching confidence is smaller than the preset threshold, uniformly dividing the required main interface into four quadrants with the center of the required main interface as the origin, then dragging the sub-image with the keyboard and mouse controller a preset distance toward the centrally symmetric quadrant of the quadrant in which it lies, to obtain a restored sub-image;
and calculating HSV (hue, saturation and value) matching confidence of the restored sub-image and the image template, wherein if the HSV matching confidence is greater than or equal to a preset threshold, the restored sub-image is a target sub-interface.
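The patent does not give an exact formula for the HSV matching confidence. The sketch below uses average per-channel histogram intersection as one plausible reading, on images assumed to already be in HSV with channels scaled to [0, 1]; it also includes the quadrant-based restoration direction from the embodiment above. All names are illustrative.

```python
import numpy as np

def hsv_confidence(hsv_a, hsv_b, bins=16):
    """One plausible HSV matching confidence: average per-channel
    histogram intersection between two HSV images. Returns a value in
    [0, 1]; identical images score 1.0."""
    scores = []
    for c in range(3):  # H, S, V channels
        ha, _ = np.histogram(hsv_a[..., c], bins=bins, range=(0.0, 1.0))
        hb, _ = np.histogram(hsv_b[..., c], bins=bins, range=(0.0, 1.0))
        ha = ha / max(ha.sum(), 1)
        hb = hb / max(hb.sum(), 1)
        scores.append(np.minimum(ha, hb).sum())
    return float(np.mean(scores))

def is_target_subinterface(hsv_sub, hsv_tmpl, threshold=0.9):
    """Accept the sub-image as the target sub-interface when the
    confidence reaches the preset threshold; otherwise restoration
    would be attempted."""
    return hsv_confidence(hsv_sub, hsv_tmpl) >= threshold

def restore_direction(sub_center, screen_size):
    """Quadrant rule from the embodiment: with the screen center as
    origin, drag the sub-image toward the centrally symmetric quadrant
    of the one it lies in, i.e. back toward the center. Returns a unit
    (dx, dy) sign pair."""
    cx, cy = screen_size[0] / 2, screen_size[1] / 2
    return (-1 if sub_center[0] > cx else 1,
            -1 if sub_center[1] > cy else 1)
```

The histogram intersection tolerates small pixel-level differences while still dropping sharply when part of the sub-image is replaced by an occluding window, which is what triggers the restoration branch.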
In some optional embodiments, intelligently segmenting the target sub-interface to obtain a single character image comprises:
converting the target sub-interface into a gray scale image to obtain a gray scale matrix corresponding to the target sub-interface;
converting the gray scale image into a binary image according to the gray scale matrix corresponding to the target sub-interface;
accumulating the pixel values of each column and of each row in the binary image, taking the abrupt change points of the accumulated column sums and of the accumulated row sums as segmentation points, and cutting out single-character images at those points.
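The projection-based segmentation described above can be sketched as follows. Assumed conventions (not stated in the patent): the binary image has character pixels equal to 1 and background equal to 0, and the abrupt change points are the transitions between zero and non-zero projection sums; names are illustrative.

```python
import numpy as np

def runs_of_ink(profile):
    """Given a 1-D projection profile (accumulated pixel values), return
    (start, end) index ranges where the profile is non-zero; the
    zero/non-zero transitions are the segmentation points."""
    mask = (np.asarray(profile) > 0).astype(np.int8)
    edges = np.flatnonzero(np.diff(np.concatenate(([0], mask, [0]))))
    return list(zip(edges[0::2], edges[1::2]))

def segment_characters(binary):
    """Split a binary text-line image into single-character images by
    cutting at the abrupt change points of the column projection, then
    tightening each piece with the row projection."""
    chars = []
    for c0, c1 in runs_of_ink(binary.sum(axis=0)):   # column cuts
        piece = binary[:, c0:c1]
        rows = runs_of_ink(piece.sum(axis=1))        # row tightening
        r0, r1 = rows[0][0], rows[-1][1]
        chars.append(piece[r0:r1, :])
    return chars
```

Example: an image containing two separated blobs yields two tightly cropped character images, one per blob, in left-to-right order.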
In a second aspect, an embodiment of the present application provides an apparatus for acquiring data of an equipment interface, including:
the interface automatic identification and presentation module is used for identifying a required main interface according to the pre-trained main interface discriminator and automatically presenting the required main interface on the equipment to be acquired;
the target sub-interface determining module is used for determining an image template to be acquired, locating a target sub-interface similar to the image template on the required main interface, and automatically restoring the occluded portion of the target sub-interface or the portion beyond the screen boundary;
and the interface data acquisition module is used for intelligently segmenting the target sub-interface to obtain a single character image, and inputting the single character image into the pre-trained character classification discriminator to obtain the identified data to be acquired.
In a third aspect, an embodiment of the present application provides a data acquisition device for an equipment interface, including a processor and a memory storing program instructions, where the processor is configured to execute the data acquisition method for the equipment interface provided in the foregoing embodiment when executing the program instructions.
In a fourth aspect, the present application provides a computer-readable medium, on which computer-readable instructions are stored, where the computer-readable instructions are executed by a processor to implement the data acquisition method for the device interface provided in the foregoing embodiment.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the data acquisition method for the equipment interface, provided by the embodiment of the application, can automatically identify and present the required main interface, is different from common data acquisition, needs to manually click the visual interface to be arranged at the forefront, and cannot handle the situation that the sub-interface in the interface can be manually moved. The method creatively uses the main interface matching and template matching technology, intelligently presents the main interface to be collected at the forefront of the screen, and can adjust the partially shielded sub-interface to be completely visible. And the method creatively uses a data intelligent segmentation technology, can effectively position all characters, combines a data intelligent identification technology, identifies character categories, achieves 100% accuracy, greatly improves working efficiency, and greatly reduces hand-making error rate. The method does not affect the normal use of the equipment, allows the change and displacement of the visual interface, can monitor the data change in real time, automatically pauses when an operator uses the equipment, automatically restarts after the use is finished, and does not stop the flow.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method for data collection for an equipment interface in accordance with an exemplary embodiment;
FIG. 2 is a block diagram illustrating a data collection method for an equipment interface in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method for data collection for an equipment interface in accordance with an exemplary embodiment;
FIG. 4 is a diagram illustrating a host interface smart recognition and presentation, according to an exemplary embodiment;
FIG. 5 is a diagram illustrating intelligent matching and restoration of templates, according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating intelligent segmentation and recognition of data according to an exemplary embodiment;
FIG. 7 illustrates a sample presentation of required acquisition sub-interface data in accordance with an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating a data acquisition device of an equipment interface in accordance with an exemplary embodiment;
FIG. 9 is a schematic diagram of an electronic device shown in accordance with an exemplary embodiment;
FIG. 10 is a schematic diagram illustrating a computer storage medium in accordance with an exemplary embodiment.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of systems and methods consistent with certain aspects of the invention, as detailed in the appended claims.
At present, whether for production equipment, government-affairs systems or other software and hardware facilities, the related data are the core basic data assets of each industry and the foundation of digital transformation; the blockage of equipment data interfaces must be broken through, the constraints of third-party software vendors must be shed, and the core data must be brought under one's own control. For these problems, two kinds of technical solution exist on the market. The first, based on OCR (optical character recognition), recognizes the data in a designated area of the screen and writes it to an excel file or a database. The second, data acquisition based on Robotic Process Automation (RPA), solidifies the acquisition process and runs it on a schedule, replacing tedious manual work; the process robot can run continuously for 24 hours. However, both conventional solutions have problems.
The first scheme, based on OCR, cannot handle an application interface whose position on the screen is not fixed: if the resolution of the main interface changes or the position of a sub-interface changes, the scheme cannot locate the target interface accurately and mislocalization occurs, greatly reducing acquisition precision. It also performs poorly on low-pixel characters (single characters, digits, etc.), is prone to misplacement and misclassification, and its insufficient recognition accuracy requires manual proofreading, which is time-consuming and laborious.
The second scheme, based on robotic acquisition, generally solidifies the acquisition process; for the many complex interfaces in a software system, the target acquisition interface must be specified manually, and the scheme lacks the ability to judge the target interface intelligently. For complex conditions such as overlapping interfaces or a covered target interface, data acquisition becomes even harder to handle and stalls. For example, on industrial production equipment the interface may undergo overall pixel shifts or size changes caused by the system itself (resolution changes, system lag, etc.) that cannot be undone, which breaks the fixed RPA flow; the scheme is therefore unsuitable for scenes where pixel coordinates or sizes change.
In summary, the conventional data acquisition scheme has the following problems: whether a required target interface exists or not cannot be intelligently judged, and the required target interface cannot be opened; the system interface may have the phenomenon of moving or being shielded, and is difficult to position, so that the acquisition precision is greatly reduced; the existing character recognition model has poor recognition effect on low-pixel characters, and the acquired result needs to be proofread again.
Based on this, the embodiments of the present application provide a data acquisition method for an equipment interface that can acquire (hidden) visual interface data on any operating system in real time. A solution for resetting sub-interfaces is also provided, and, combined with intelligent data segmentation and recognition, the following problems are solved: data acquisition scenes in which the main interface is hidden or not opened are unified, and the main interface is brought to the front of the screen; scenes in which a sub-interface is occluded or partially beyond the boundary of the main interface are unified, and the sub-interface can be reset; and the low localization and recognition accuracy of existing OCR models is overcome. The method accommodates changing workflows, monitors data in real time, pauses the acquisition flow when the equipment is operated manually, and restarts intelligently.
The following describes the data acquisition method of the device interface provided in the embodiment of the present application in detail with reference to the accompanying drawings. Referring to fig. 1, the method specifically includes the following steps.
S101, identifying a required main interface according to a pre-trained main interface discriminator, and automatically presenting the required main interface on equipment to be acquired.
The device to be acquired may be production equipment in industry, a government-affairs system of a government department, an instrument in a medical institution, market-analysis software in a financial institution, a shopping system in retail, and so on; the embodiments of this application do not specifically limit it.
To acquire the data on an interface of the device, the number of acquisitions and the acquisition period may first be set, for example to single acquisition, continuous acquisition or another acquisition mode. The pre-trained main interface discriminator then identifies the required main interface and automatically presents it on the device to be acquired.
Specifically, the main interface of the device to be acquired is captured; the current main interface can be captured with an image interceptor and input into the pre-trained main interface discriminator to judge whether it is the required main interface, the required main interface being the software or web-page interface needed for acquiring data. When the captured main interface is the desktop interface, the required main interface is found and presented automatically through a keyboard and mouse controller. For example, when the discriminator judges the interface to be a windows desktop interface, the keyboard and mouse controller automatically clicks the system software icon at the specified position and enters the account password, or automatically clicks a browser icon and enters the required web address, presenting the required main interface.
When the captured main interface is another software interface, the interface is first switched to the desktop through the keyboard and mouse controller, after which the required main interface is found and presented automatically. For example, the interface is switched to the windows desktop with the keyboard and mouse controller, which then automatically clicks the system software icon at the designated position and enters the account password, or automatically clicks a browser icon and enters the required web address, presenting the required main interface.
In some optional embodiments, before identifying the required main interface according to the pre-trained main interface discriminator, the method further comprises: capturing a preset number of main interfaces with a screenshot tool, where the preset number of main interfaces include the required main interface, desktop interfaces and other software interfaces; labeling the captured interfaces, marking the main interface needed for data acquisition as the required main interface, captured windows desktop interfaces as desktop interfaces, and other captured interfaces unrelated to the collected data as other software interfaces; dividing the labeled images into a training set, a test set and a verification set; building an image classification neural network model (an image classification model from the prior art may be used); and training the main interface discriminator on the training set, the test set, the verification set and the classification neural network model. On the basis that the accuracy on the training set and the verification set reaches 100%, the test set is evaluated; if the accuracy on the test set also reaches 100%, the model parameters are frozen and the trainer is converted into the main interface discriminator, giving the trained main interface discriminator.
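The embodiment above only names "an image classification neural network model from the prior art" without specifying it. As a minimal stand-in, the sketch below trains a multinomial logistic-regression classifier on flattened screenshots over the three labels (required / desktop / other), with the train/verification/test split the embodiment describes; in practice a convolutional network would replace it. All names and the 60/20/20 split ratio are illustrative assumptions.

```python
import numpy as np

LABELS = ("required", "desktop", "other")  # the three classes named above

def split_dataset(X, y, seed=0):
    """Shuffle labeled screenshots and divide them into training,
    verification and test sets (here 60/20/20)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    a, b = int(0.6 * len(X)), int(0.8 * len(X))
    return ((X[idx[:a]], y[idx[:a]]),
            (X[idx[a:b]], y[idx[a:b]]),
            (X[idx[b:]], y[idx[b:]]))

def train_softmax(X, y, classes=3, lr=0.5, epochs=200):
    """Minimal stand-in for the classification neural network model:
    softmax regression on flattened screenshots, trained by gradient
    descent on the cross-entropy loss."""
    W = np.zeros((X.shape[1], classes))
    onehot = np.eye(classes)[y]
    for _ in range(epochs):
        z = X @ W
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - onehot) / len(X)
    return W

def accuracy(W, X, y):
    """Fraction of screenshots assigned their correct interface label."""
    return float(((X @ W).argmax(axis=1) == y).mean())
```

On data as cleanly separable as distinct interface classes tend to be, even this linear stand-in reaches high accuracy; the embodiment's 100%-accuracy gate on all three splits would apply identically to any swapped-in model.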
Fig. 4 is a schematic diagram illustrating intelligent identification and presentation of the main interface according to an exemplary embodiment. As shown in fig. 4, a large number of main interfaces are first captured with the image interceptor; the captured interfaces are then classified and labeled to build the training, test and verification sets, and the main interface discriminator is trained on these sets and the classification neural network model until its accuracy reaches 100%. The currently captured main interface of the device is then identified with the discriminator to judge whether it is the required main interface. When it is, the required main interface is presented automatically; when it is a desktop interface, the required main interface is found and presented automatically through the keyboard and mouse controller; and when it is another software interface, the interface is first switched to the desktop through the keyboard and mouse controller, after which the required main interface is found and presented automatically through the keyboard and mouse controller.
Through these steps, the required main interface can be automatically identified and actively presented at the front of the screen of the device to be acquired. This unifies data-acquisition scenarios in which the main interface is hidden or not yet opened, and ensures the main interface can always be presented at the forefront of the screen.
S102, determining an image template to be acquired, locating a target sub-interface similar to the image template on the required main interface, and automatically restoring an occluded part of the target sub-interface or a part beyond the screen boundary.
In a possible implementation, the interface to be acquired may be a part of the required main interface. That part is first intercepted as the image template to be acquired; since the region of the main interface to be acquired may change over time, the target sub-interface similar to the image template is then continuously located on the required main interface amid these changes.
Specifically, the image template to be acquired is intercepted from the required main interface, and the length and width of the image template are obtained. A screenshot tool may be used to intercept the desired sub-interface in the required main interface as the image template to be captured, outputting the template's pixel length and width.
Further, according to the required main interface, the image template and a correlation coefficient matching algorithm, the similarity between each position in the required main interface and the image template is obtained. Firstly, carrying out graying processing on a required main interface and an image template to respectively obtain a grayscale matrix; and then inputting the initial point coordinates of the required main interface, the initial point coordinates of the image template, the gray matrix of the required main interface and the gray matrix of the image template into a correlation coefficient matching algorithm to obtain the similarity between each position in the required main interface and the image template.
In one possible implementation, the correlation coefficient matching algorithm is as follows:
$$R(x, y) = \frac{\sum_{x', y'} \bigl( T(x', y') \cdot I(x + x',\; y + y') \bigr)}{\sqrt{\sum_{x', y'} T(x', y')^{2} \cdot \sum_{x', y'} I(x + x',\; y + y')^{2}}}$$
wherein, (x, y) represents the start point coordinates of the required main interface, (x ', y') represents the start point coordinates of the image template, T represents the gray matrix of the image template, and I represents the gray matrix of the required main interface. R (x, y) represents the similarity between each position in the required main interface and the image template, and if the pixel size of the required main interface is 1920 × 1080, R (x, y) is a similarity matrix of 1920 × 1080. Each value ranges between 0 and 1, with 1 indicating the best match and 0 indicating no correlation.
The position coordinate with the maximum similarity is taken as the initial coordinate, and with this coordinate as the starting point, a sub-image whose length and width equal those of the image template is intercepted from the required main interface; the starting point is the upper-left corner of the intercepted image.
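The matching and cropping steps above can be sketched in NumPy as follows. This is a minimal, direct transcription of the normalized-correlation formula over grayscale matrices, not the patent's implementation; in practice a library routine such as OpenCV's `cv2.matchTemplate` with the `TM_CCORR_NORMED` mode computes the same measure far more efficiently.

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the similarity matrix R,
    where R[y, x] is the normalized correlation of the template with the
    equally sized window whose upper-left corner is (x, y)."""
    h, w = template.shape
    H, W = image.shape
    t_norm = np.sqrt((template ** 2).sum())
    R = np.zeros((H - h + 1, W - w + 1))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = image[y:y + h, x:x + w]
            denom = t_norm * np.sqrt((win ** 2).sum())
            # Normalized correlation in [0, 1]; an all-zero window has no
            # correlation, so it is assigned 0 to avoid division by zero.
            R[y, x] = (template * win).sum() / denom if denom > 0 else 0.0
    return R

def crop_best_match(image, template):
    """Crop the sub-image at the position of maximum similarity, using that
    position as the upper-left corner (as described in the text above)."""
    R = match_template(image, template)
    y0, x0 = np.unravel_index(R.argmax(), R.shape)
    h, w = template.shape
    return image[y0:y0 + h, x0:x0 + w], (x0, y0), R.max()
```

An exact occurrence of the template yields a similarity of 1 at its upper-left corner, so `crop_best_match` recovers both the location and the sub-image.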
And further, calculating HSV matching confidence of the sub-image and the image template, wherein if the HSV matching confidence is greater than or equal to a preset threshold, the sub-image is the target sub-interface.
Specifically, the sub-image and the image template are each converted from an RGB three-channel color image into an HSV image. The specific conversion is as follows. In an RGB image, R, G and B are the luminance values in the red, green and blue channels. The channels are first normalized:

R' = R/255, G' = G/255, B' = B/255.

Then:

C_max = max(R', G', B'), C_min = min(R', G', B'), Δ = C_max − C_min.

In the HSV image:

if C_max = R', H = (G' − B')/(C_max − C_min) × 60;

if C_max = G', H = 120 + (B' − R')/(C_max − C_min) × 60;

if C_max = B', H = 240 + (R' − G')/(C_max − C_min) × 60;

S = (C_max − C_min)/C_max;

V = C_max.
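Assuming 8-bit RGB inputs, the conversion above can be sketched as follows. The standard +360° wrap for a negative hue and the Δ = 0 / C_max = 0 degenerate cases, which the formulas above leave implicit, are handled explicitly here.

```python
def rgb_to_hsv(r, g, b):
    """Convert 8-bit R, G, B values to (H in degrees, S in [0,1], V in [0,1]),
    following the formulas above."""
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0
    c_max, c_min = max(rp, gp, bp), min(rp, gp, bp)
    delta = c_max - c_min
    if delta == 0:
        h = 0.0                                   # achromatic: hue undefined, use 0
    elif c_max == rp:
        h = ((gp - bp) / delta * 60) % 360        # wrap negative hues into [0, 360)
    elif c_max == gp:
        h = 120 + (bp - rp) / delta * 60
    else:
        h = 240 + (rp - gp) / delta * 60
    s = 0.0 if c_max == 0 else delta / c_max
    return h, s, c_max                            # V = C_max
```

Pure red, green and blue map to hues of 0°, 120° and 240° respectively, matching the three cases in the formulas.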
After the HSV values of the sub-image and the image template are calculated, the two images are converted into the vectors (H1, S1, V1) and (H2, S2, V2), respectively, and a template discriminator with a built-in cosine similarity formula computes the HSV matching confidence of the sub-image against the image template. If the HSV matching confidence is greater than or equal to a preset threshold, the sub-image is the target sub-interface. If the HSV matching confidence is smaller than the preset threshold, the sub-image is occluded or exceeds the screen boundary; the occluded part of the sub-image, or the part beyond the screen boundary, is then automatically restored, and if the HSV matching confidence of the restored sub-image is greater than or equal to the preset threshold, the restored sub-image is the target sub-interface. The preset threshold may be set according to actual conditions, and the embodiment of the present disclosure does not specifically limit it.
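The cosine-similarity confidence can be written as a short sketch; the threshold value used here is illustrative, since the patent leaves it to be set according to actual conditions.

```python
import math

def hsv_confidence(hsv_a, hsv_b):
    """Cosine similarity of two (H, S, V) vectors, used as the matching
    confidence of a sub-image against the image template."""
    dot = sum(a * b for a, b in zip(hsv_a, hsv_b))
    na = math.sqrt(sum(a * a for a in hsv_a))
    nb = math.sqrt(sum(b * b for b in hsv_b))
    if na == 0 or nb == 0:
        return 0.0               # a zero vector carries no color information
    return dot / (na * nb)

def is_target(hsv_sub, hsv_template, threshold=0.99):  # threshold illustrative
    """True when the sub-image's confidence reaches the preset threshold."""
    return hsv_confidence(hsv_sub, hsv_template) >= threshold
```

Identical HSV vectors give a confidence of exactly 1, the best possible match.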
Specifically, the unoccluded part of the sub-image is clicked through the keyboard-and-mouse controller to obtain a restored sub-image; the HSV (hue, saturation, value) matching confidence of the restored sub-image against the image template is calculated, and if it is greater than or equal to the preset threshold, the restored sub-image is taken as the target sub-interface.
If the HSV matching confidence is smaller than the preset threshold, the required main interface is uniformly divided into four quadrants with its center as the origin. The sub-image is dragged with the keyboard-and-mouse controller and moved a preset distance toward the quadrant centrally symmetric to the quadrant in which the sub-image lies (the preset drag distance may be set according to the actual situation), yielding a restored sub-image. The HSV matching confidence of the restored sub-image against the image template is then calculated, and if it is greater than or equal to the preset threshold, the restored sub-image is the target sub-interface. If the matching confidence of the restored sub-image is still smaller than the preset threshold, the correlation coefficient matching algorithm is run again to re-confirm a sub-image with high correlation.
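The quadrant rule reduces to moving the sub-image toward the screen center: the centrally symmetric quadrant always lies on the opposite side of the origin. A minimal sketch of the drag-offset computation (the function name and arguments are illustrative; the actual drag would be issued by the keyboard-and-mouse controller, e.g. via a GUI-automation call):

```python
def drag_vector(sub_center, screen_size, distance):
    """Return the (dx, dy) offset that moves the sub-image's center a preset
    distance toward the quadrant centrally symmetric to its own, i.e. toward
    the screen center taken as the origin."""
    cx, cy = screen_size[0] / 2.0, screen_size[1] / 2.0
    x, y = sub_center
    dx = distance if x < cx else -distance   # left half moves right, and vice versa
    dy = distance if y < cy else -distance   # top half moves down, and vice versa
    return dx, dy
```

For example, a sub-image in the upper-left quadrant of a 1920 × 1080 screen is dragged down and to the right, which pulls a part that slid past the top or left screen edge back into view.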
And after the target sub-interface is obtained, intercepting the target sub-interface by using an image interceptor.
Fig. 5 is a schematic diagram illustrating intelligent template matching and restoration according to an exemplary embodiment. As shown in fig. 5, the image template to be acquired is first determined; a sub-image with high similarity to the image template is then obtained through the correlation coefficient matching algorithm; the sub-image and the image template are each converted from an RGB three-channel color map into an HSV image; and the HSV matching confidence of the sub-image against the image template is calculated. If the HSV matching confidence is greater than or equal to the preset threshold, the sub-image is the target sub-interface.
If the HSV matching confidence is smaller than the preset threshold, the sub-image is occluded or exceeds the screen boundary, and the occluded part of the sub-image, or the part beyond the screen boundary, is automatically restored. First, the unoccluded part of the image is clicked and marked to obtain a displayed, restored sub-image; if the HSV matching confidence of the restored sub-image is greater than or equal to the preset threshold, the restored sub-image is the target sub-interface.
If the confidence is still smaller than the preset threshold even though the sub-image has been marked (i.e., its unoccluded part has been clicked), the sub-image may exceed the screen boundary, and it is dragged toward the center position by the keyboard-and-mouse controller to obtain a restored sub-image. The HSV matching confidence of the restored sub-image against the image template is calculated, and if it is greater than or equal to the preset threshold, the restored sub-image is the target sub-interface. If the matching confidence of the restored sub-image is still smaller than the preset threshold, the correlation coefficient matching algorithm is run again to re-confirm a sub-image with high correlation.
Through this step, the target sub-interface to be acquired can be obtained intelligently, and a partially occluded sub-interface, or one beyond the screen boundary, is adjusted to be completely visible.
S103, intelligently segmenting the target sub-interface to obtain a single character image, and inputting the single character image into a pre-trained character classification discriminator to obtain identified data to be acquired.
Specifically, the target sub-interface is converted into a gray scale image, and a gray scale matrix corresponding to the target sub-interface is obtained. And converting the gray image into a binary image according to the gray matrix corresponding to the target sub-interface. That is, a threshold value is preset, and elements larger than the threshold value in the gray matrix become 255 and elements smaller than the threshold value become 0.
The pixel values of each column and each row in the binary image are accumulated; the columns and rows at which the accumulated pixel values change abruptly are found; and the intersections of the column change points and the row change points are taken as segmentation points to cut out single-character images. Through this step, all non-touching (single) character images can be segmented.
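Assuming a grayscale matrix as input, the binarization and projection-based split above can be sketched as follows. The "abrupt change" detection is implemented here as transitions between all-blank and non-blank columns and rows, which is the simplest reading of the description.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Elements above the preset threshold become 255, the rest 0."""
    return np.where(gray > threshold, 255, 0)

def projection_segments(profile):
    """Split a 1-D accumulated-pixel profile at the points where it jumps
    between zero and non-zero (the abrupt-change columns/rows)."""
    segments, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i                       # entering a character run
        elif v == 0 and start is not None:
            segments.append((start, i))     # leaving a character run
            start = None
    if start is not None:
        segments.append((start, len(profile)))
    return segments

def segment_characters(binary):
    """Cut out each single-character image at the intersections of the
    column segments and the row segments."""
    cols = projection_segments(binary.sum(axis=0))
    rows = projection_segments(binary.sum(axis=1))
    return [binary[r0:r1, c0:c1] for r0, r1 in rows for c0, c1 in cols]
```

Real interfaces would also need per-character padding and a fixed output size before classification; that normalization is omitted here for brevity.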
And inputting the single character image into a pre-trained character classification discriminator to obtain the recognized data to be acquired. As shown in fig. 7, for the sub-interface data display sample to be collected, all characters can be effectively located and the table in the interface can be identified by the intelligent segmentation technology.
Training the character classification discriminator is also required before recognition with it. Specifically, a labeling tool is used to label a large number of collected single-character binary images, which are divided into a training set and a test set in a ratio of 8:2. A multi-class trainer is established as a convolutional neural network model comprising two convolutional layers, a pooling layer and a fully-connected layer. The training parameters are configured, including the total number of training steps, the batch size, the learning rate and the optimization algorithm. The accuracy of the model is tested with the test set, and once the accuracy meets the 100% requirement, the model is saved to obtain the trained character classification discriminator.
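A minimal sketch of a model with the stated shape — two convolutional layers, one pooling layer and one fully-connected layer — using PyTorch. The layer widths, the 28 × 28 input size and the number of character classes are illustrative assumptions; the patent does not specify them.

```python
import torch
import torch.nn as nn

class CharClassifier(nn.Module):
    """Two conv layers, one pooling layer, one fully-connected layer."""
    def __init__(self, num_classes=10):              # class count is illustrative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer 1
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # convolutional layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer: 28 -> 14
        )
        self.fc = nn.Linear(16 * 14 * 14, num_classes)   # fully-connected layer

    def forward(self, x):                # x: (N, 1, 28, 28) binary character images
        x = self.features(x)
        return self.fc(x.flatten(1))
```

Training would pair this model with a cross-entropy loss and an optimizer (e.g. `torch.optim.Adam`), iterating for the configured number of steps and batch size.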
Fig. 6 is a schematic diagram illustrating an intelligent segmentation and recognition of data according to an exemplary embodiment, as shown in fig. 6, first, a character classification discriminator is trained, and when the accuracy reaches 100% requirement, a model is saved to obtain a trained character classification discriminator. And inputting the segmented character image into a character classification discriminator to obtain the identified data to be acquired.
Further, the identified data is intelligently divided and stored. Specifically, the data is stored through a memory, which may store it to a database or a local folder, including mysql, oracle, sql server, db2, postgresql, xlsx, csv, and the like. The above steps are repeated periodically according to the timing parameters, and the results are output to the preset database or preset local folder. The database or local file may thus be updated in real time.
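The storage step can be sketched as follows, writing recognized values either to a local CSV file or to a database. SQLite stands in here for the listed databases such as mysql or postgresql, and the table and column names are illustrative.

```python
import csv
import sqlite3

def store_csv(rows, path):
    """Append recognized (timestamp, value) rows to a local CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerows(rows)

def store_db(rows, db_path):
    """Insert recognized (timestamp, value) rows into a database table,
    creating the table on first use."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS acquired (ts TEXT, value TEXT)")
    con.executemany("INSERT INTO acquired VALUES (?, ?)", rows)
    con.commit()
    con.close()
```

Because both writers append, calling them from the periodic acquisition loop keeps the file or table updated in real time, as the text describes.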
In order to facilitate understanding of the data acquisition method of the device interface provided in the embodiments of the present application, the following is further described in detail with reference to fig. 2 and 3.
As shown in fig. 2, the data acquisition method provided in the embodiment of the present application comprises: first, the acquisition mode is set, including the number of acquisitions (continuous acquisition, single acquisition or interval acquisition) and the acquisition period. Then, main interface identification and automatic presentation are performed: the main interface is captured by the screenshot tool, the required main interface is identified by the pre-trained main interface discriminator, and the required main interface is then presented automatically through the keyboard-and-mouse controller.
Further, the target sub-interface to be acquired is obtained through intelligent template matching and positioning. This comprises: determining the image template to be acquired with the screenshot tool; obtaining a sub-image with high similarity to the image template through the correlation coefficient matching algorithm; converting the sub-image and the image template from RGB (red, green, blue) three-channel color maps into HSV (hue, saturation, value) images with the image processor; and calculating the HSV matching confidence of the sub-image against the image template. If the HSV matching confidence is greater than or equal to the preset threshold, the sub-image is the target sub-interface. If the HSV matching confidence is smaller than the preset threshold, the sub-image is occluded or exceeds the screen boundary, and the occluded part or the part beyond the screen boundary is automatically restored: first, the unoccluded part of the image is clicked and marked through the keyboard-and-mouse controller to obtain a displayed, restored sub-image; if the HSV matching confidence of the restored sub-image is greater than or equal to the preset threshold, the restored sub-image is the target sub-interface.
If the confidence is still smaller than the preset threshold even though the sub-image has been marked (i.e., its unoccluded part has been clicked), the sub-image may exceed the screen boundary, and it is dragged toward the center position by the keyboard-and-mouse controller to obtain a restored sub-image. The HSV matching confidence of the restored sub-image against the image template is calculated; if it is greater than or equal to the preset threshold, the restored sub-image is the target sub-interface.
Further, the data is intelligently segmented and recognized to obtain the identified data to be acquired. The target sub-interface is converted into a grayscale image to obtain its corresponding gray matrix, and the grayscale image is converted into a binary image according to that matrix. Through the image divider, the pixel values of each column and each row in the binary image are accumulated; the columns and rows at which the accumulated pixel values change abruptly are found; and the intersections of the column change points and the row change points are taken as segmentation points to cut out single-character images.
The character classification discriminator is also trained. Specifically, a labeling tool is used to label a large number of collected single-character binary images, which are divided into a training set and a test set in a ratio of 8:2. A multi-class trainer is established as a convolutional neural network model comprising two convolutional layers, a pooling layer and a fully-connected layer. The training parameters are configured, including the total number of training steps, the batch size, the learning rate and the optimization algorithm. The accuracy of the model is tested with the test set, and once the accuracy meets the 100% requirement, the model is saved to obtain the trained character classification discriminator.
And identifying the segmented character images through a trained character classification discriminator.
Finally, the identified data is stored, and the data can be stored in a database or a local folder, including mysql, oracle, sql server, db2, postgresql, xlsx, csv, and the like.
Fig. 3 is a flowchart of an intelligent interface identification and data acquisition method, and as shown in fig. 3, the identification and data acquisition method for an equipment interface according to the embodiment of the present disclosure includes five steps of timing setting, intelligent main interface identification presentation, intelligent template matching positioning and restoration, intelligent data segmentation and identification, and data storage.
The data acquisition method provided by the embodiment of the present application differs from common data acquisition, in which the visual interface must be clicked to the forefront manually and the case of a sub-interface being manually moved within the interface cannot be handled. The method creatively uses main-interface matching and template matching techniques, intelligently presents the main interface to be acquired at the forefront of the screen, and can adjust a partially occluded sub-interface to be completely visible.
Unlike existing OCR (optical character recognition), which suffers from low accuracy on low-pixel data, the invention creatively uses an intelligent data segmentation technique that can effectively locate all characters, combined with an intelligent data recognition technique that recognizes character categories with 100% accuracy, greatly improving working efficiency and greatly reducing the error rate of manual transcription.
Compared with a traditional RPA (robotic process automation) robot, the method does not affect normal use of the device, allows the visual interface to change and be displaced, and can monitor data changes in real time: it automatically pauses while an operator is using the device and automatically restarts after use is finished, without terminating the process.
An embodiment of the present application further provides a device for acquiring data of an equipment interface, where the device is configured to execute the method for acquiring data of an equipment interface according to the foregoing embodiment, and as shown in fig. 8, the device includes:
an interface automatic identification and presentation module 801, configured to identify a required main interface according to a pre-trained main interface discriminator, and automatically present the required main interface on a device to be acquired;
a target sub-interface determining module 802, configured to determine an image template to be acquired, locate a target sub-interface similar to the image template on a required main interface, and automatically restore a blocked portion of the target sub-interface or a portion beyond a screen boundary;
and the interface data acquisition module 803 is used for intelligently segmenting the target sub-interface to obtain a single character image, and inputting the single character image into the pre-trained character classification discriminator to obtain the identified data to be acquired.
It should be noted that, when the data acquisition apparatus for an equipment interface provided in the foregoing embodiment executes a data acquisition method for an equipment interface, the division of each functional module is merely used as an example, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the equipment is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the data acquisition device of the device interface and the data acquisition method of the device interface provided in the above embodiments belong to the same concept, and details of implementation processes are described in the method embodiments, which are not described herein again.
The embodiment of the present application further provides an electronic device corresponding to the method for acquiring data of the device interface provided in the foregoing embodiment, so as to execute the method for acquiring data of the device interface.
Referring to fig. 9, a schematic diagram of an electronic device provided in some embodiments of the present application is shown. As shown in fig. 9, the electronic apparatus includes: the system comprises a processor 900, a memory 901, a bus 902 and a communication interface 903, wherein the processor 900, the communication interface 903 and the memory 901 are connected through the bus 902; the memory 901 stores a computer program that can be executed on the processor 900, and when the processor 900 executes the computer program, the data acquisition method of the device interface provided by any of the foregoing embodiments of the present application is executed.
The Memory 901 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 903 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used.
Bus 902 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 901 is used for storing a program, and the processor 900 executes the program after receiving an execution instruction, where the method for acquiring data of an equipment interface disclosed in any embodiment of the present application may be applied to the processor 900, or implemented by the processor 900.
The processor 900 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 900. The Processor 900 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 901, and the processor 900 reads the information in the memory 901, and completes the steps of the method in combination with its hardware.
The electronic equipment provided by the embodiment of the application and the data acquisition method of the equipment interface provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic equipment.
Referring to fig. 10, the illustrated computer-readable storage medium is an optical disc 1000, on which a computer program (i.e., a program product) is stored, and when the computer program is executed by a processor, the computer program may execute the data acquisition method of the device interface provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the data acquisition method of the device interface provided by the embodiment of the present application have the same beneficial effects as the method adopted, operated or implemented by the application program stored in the computer-readable storage medium.
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above examples only show some embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and these changes and modifications are all within the scope of the invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A data acquisition method for an equipment interface is characterized by comprising the following steps:
identifying a required main interface according to a pre-trained main interface discriminator, and automatically presenting the required main interface on equipment to be acquired;
determining an image template to be acquired, positioning a target sub-interface similar to the image template on the required main interface, and automatically restoring a shielding part or a part exceeding the screen boundary of the target sub-interface;
and intelligently segmenting the target sub-interface to obtain a single character image, and inputting the single character image into a pre-trained character classification discriminator to obtain the identified data to be acquired.
2. The method of claim 1, wherein identifying a desired primary interface from a pre-trained primary interface discriminator and automatically presenting the desired primary interface on a device to be acquired comprises:
intercepting a main interface of the equipment to be collected;
inputting the intercepted main interface into a pre-trained main interface discriminator, and judging whether the intercepted main interface is a required main interface or not;
when the intercepted main interface is the required main interface, automatically presenting the required main interface;
when the intercepted main interface is a desktop interface, automatically searching and presenting the required main interface through a keyboard and mouse controller;
and when the intercepted main interface is other software interfaces, switching the interface to the desktop interface through the keyboard and mouse controller, and automatically searching and presenting the required main interface through the keyboard and mouse controller.
3. The method of claim 1, before identifying the desired home interface based on the pre-trained home interface arbiter, further comprising:
intercepting a preset number of main interfaces, wherein the preset number of main interfaces comprise a required main interface, a desktop interface and other software interfaces;
dividing the preset number of main interfaces into a training set, a testing set and a verification set;
and training the main interface discriminator according to the training set, the test set, the verification set and the classification neural network model to obtain the trained main interface discriminator.
4. The method of claim 1, wherein determining an image template to be captured, locating a target sub-interface similar to the image template on the desired main interface, and automatically restoring an occluded portion or a portion beyond a screen boundary of the target sub-interface comprises:
intercepting an image template to be acquired on the required main interface, and acquiring the length and width of the image template;
obtaining the similarity between each position in the required main interface and the image template according to the required main interface, the image template and a correlation coefficient matching algorithm;
taking the position coordinate with the maximum similarity as an initial coordinate, taking the initial coordinate as a starting point, and intercepting a sub-image with the length and the width equal to those of the image template on a required main interface;
calculating HSV (hue, saturation and value) matching confidence of the subimages and the image template, wherein if the HSV matching confidence is greater than or equal to a preset threshold, the subimages are target subinterfaces;
if the HSV matching confidence coefficient is smaller than a preset threshold value, the sub-image is shielded or exceeds the screen boundary, the shielded part or the part exceeding the screen boundary of the sub-image is automatically restored, and if the HSV matching confidence coefficient of the restored sub-image is larger than or equal to the preset threshold value, the restored sub-image is a target sub-interface.
5. The method of claim 4, wherein obtaining the similarity between each position in the desired main interface and the image template according to the desired main interface, the image template and a correlation coefficient matching algorithm comprises:
graying the required main interface and the image template to respectively obtain a gray matrix;
and inputting the initial point coordinate of the required main interface, the initial point coordinate of the image template, the gray matrix of the required main interface and the gray matrix of the image template into a correlation coefficient matching algorithm to obtain the similarity between each position in the required main interface and the image template.
6. The method according to claim 4, wherein automatically restoring the occluded part of the sub-image or the part beyond the screen boundary, the restored sub-image being the target sub-interface if its HSV matching confidence is greater than or equal to the preset threshold, comprises the following steps:
clicking the unoccluded part of the sub-image via a keyboard-and-mouse controller to obtain a restored sub-image;
calculating the HSV (hue, saturation, value) matching confidence of the restored sub-image against the image template, and taking the restored sub-image as the target sub-interface if the HSV matching confidence is greater than or equal to the preset threshold;
if the HSV matching confidence is smaller than the preset threshold, uniformly dividing the required main interface into four quadrants with its center as the origin, dragging the sub-image with the keyboard-and-mouse controller, and moving it a preset distance toward the quadrant centrally symmetric to the quadrant in which it lies, to obtain a restored sub-image;
and calculating the HSV matching confidence of the restored sub-image against the image template, the restored sub-image being the target sub-interface if the HSV matching confidence is greater than or equal to the preset threshold.
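The quadrant rule in claim 6 reduces to choosing a drag direction: with the main-interface center as origin, the sub-image is moved toward the quadrant centrally symmetric to its own. A hypothetical helper (the function name and the screen-coordinate convention, with y growing downward, are assumptions) could be:

```python
def drag_vector(main_w, main_h, sub_cx, sub_cy, step):
    """Return the (dx, dy) drag offset that moves a sub-image centered at
    (sub_cx, sub_cy) a preset distance `step` toward the quadrant
    centrally symmetric to the one it currently occupies."""
    dx = -step if sub_cx > main_w / 2 else step   # right half -> drag left
    dy = -step if sub_cy > main_h / 2 else step   # bottom half -> drag up
    return dx, dy
```

For a sub-image centered at (1800, 100) on a 1920x1080 screen (top-right quadrant), the helper returns (-50, 50) with `step=50`: drag left and down, toward the bottom-left quadrant.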
7. The method of claim 1, wherein intelligently segmenting the target sub-interface to obtain a single character image comprises:
converting the target sub-interface into a grayscale image to obtain the grayscale matrix corresponding to the target sub-interface;
converting the grayscale image into a binary image according to the grayscale matrix corresponding to the target sub-interface;
and summing the pixel values of each column and each row of the binary image, and segmenting single character images by taking the intersections of the abrupt change points of the column sums and the row sums as segmentation points.
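The projection-based split of claim 7 can be sketched for the column direction as follows (row segmentation works the same way); the function name and the zero gap-threshold are illustrative assumptions, with ink pixels taken as 1 and background as 0 in the binary image:

```python
import numpy as np

def segment_columns(binary, gap_thresh=0):
    """Split a binary text-line image into single character images by
    column projection: sum each column and cut wherever the sum crosses
    between zero (gap) and non-zero (ink) - the 'abrupt change points'."""
    col_sum = binary.sum(axis=0)
    ink = col_sum > gap_thresh
    chars, start = [], None
    for x, on in enumerate(ink):
        if on and start is None:
            start = x                            # a character run begins
        elif not on and start is not None:
            chars.append(binary[:, start:x])     # run ends at a gap
            start = None
    if start is not None:                        # run touches the right edge
        chars.append(binary[:, start:])
    return chars
```

Each run of non-empty columns between two gaps becomes one character image; applying the same procedure along rows first isolates the text lines, and the intersections of row and column cuts give the segmentation points of the claim.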
8. A data acquisition apparatus for an equipment interface, comprising:
an interface automatic identification and presentation module, used for identifying the required main interface according to a pre-trained main interface discriminator and automatically presenting the required main interface on the device from which data is to be acquired;
a target sub-interface determination module, used for determining the image template to be acquired, locating on the required main interface a target sub-interface similar to the image template, and automatically restoring the occluded part of the target sub-interface or the part beyond the screen boundary;
and an interface data acquisition module, used for intelligently segmenting the target sub-interface to obtain a single character image, and inputting the single character image into a pre-trained character classification discriminator to obtain the recognized data to be acquired.
9. A data acquisition device for an equipment interface, comprising a processor and a memory storing program instructions, wherein the processor is configured to perform the data acquisition method for an equipment interface according to any one of claims 1 to 7 when executing the program instructions.
10. A computer-readable medium having computer-readable instructions stored thereon which, when executed by a processor, implement the data acquisition method for an equipment interface according to any one of claims 1 to 7.
CN202211110854.6A 2022-09-13 2022-09-13 Data acquisition method, device, equipment and storage medium for equipment interface Pending CN115629831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211110854.6A CN115629831A (en) 2022-09-13 2022-09-13 Data acquisition method, device, equipment and storage medium for equipment interface


Publications (1)

Publication Number Publication Date
CN115629831A true CN115629831A (en) 2023-01-20

Family

ID=84903277



Similar Documents

Publication Publication Date Title
CN110348441B (en) Value-added tax invoice identification method and device, computer equipment and storage medium
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
CN109858476B (en) Tag expansion method and electronic equipment
CN113111844B (en) Operation posture evaluation method and device, local terminal and readable storage medium
US8542912B2 (en) Determining the uniqueness of a model for machine vision
CN110889437B (en) Image processing method and device, electronic equipment and storage medium
CN104573675A (en) Operating image displaying method and device
CN112927776A (en) Artificial intelligence automatic interpretation system for medical inspection report
US8542905B2 (en) Determining the uniqueness of a model for machine vision
CN113222913A (en) Circuit board defect detection positioning method and device and storage medium
CN113608805B (en) Mask prediction method, image processing method, display method and device
CN115082659A (en) Image annotation method and device, electronic equipment and storage medium
CN116309963B (en) Batch labeling method and device for images, electronic equipment and storage medium
US20230401809A1 (en) Image data augmentation device and method
CN111062388A (en) Advertisement character recognition method, system, medium and device based on deep learning
CN110765917A (en) Active learning method, device, terminal and medium suitable for face recognition model training
CN115629831A (en) Data acquisition method, device, equipment and storage medium for equipment interface
CN116246161A (en) Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge
CN115631374A (en) Control operation method, control detection model training method, device and equipment
CN113504865A (en) Work order label adding method, device, equipment and storage medium
CN112435274A (en) Remote sensing image planar ground object extraction method based on object-oriented segmentation
CN112131418A (en) Target labeling method, target labeling device and computer-readable storage medium
Biadgie et al. Speed-up feature detector using adaptive accelerated segment test
CN116166889B (en) Hotel product screening method, device, equipment and storage medium
CN114792295B (en) Method, device, equipment and medium for correcting blocked object based on intelligent photo frame

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination