CN112199997A - Terminal and tool processing method

Terminal and tool processing method

Info

Publication number
CN112199997A
CN112199997A
Authority
CN
China
Prior art keywords
image
tools
tool
neural network
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010930147.6A
Other languages
Chinese (zh)
Inventor
王续澎
徐晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority to CN202010930147.6A
Publication of CN112199997A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof


Abstract

The invention discloses a terminal and a tool processing method, relating to the technical field of artificial intelligence. The terminal of the embodiments comprises a camera for capturing images and a processor configured to: in response to a first tool shooting instruction, acquire a first image captured by the camera, the first image containing all of the tools sorted after use; input the first image into a trained neural network model to determine the identifiers of the first tools in the first image, an identifier being information characterizing the type of a first tool; determine the number of first tools contained in the first image according to those identifiers; and compare the number of the first tools with the number of second tools contained in a second image, the second image containing all of the tools before use. Whether any tool has been lost after use can then be judged accurately and conveniently from the comparison result.

Description

Terminal and tool processing method
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a terminal and a tool processing method.
Background
When working, constructors generally need to carry various types of construction tools and use them together to complete the construction. After the construction is finished, the tools are checked to confirm that none is missing before the constructors withdraw from the construction site.
In the related art, the tools are checked by the constructors themselves; that is, whether the tool set is complete is compared manually to determine whether anything has been left behind.
However, manual comparison is inefficient and error-prone, making it difficult to determine accurately which tools are missing.
Disclosure of Invention
The invention provides a terminal and a tool processing method, which are used for efficiently and accurately determining whether tools are lost after construction is finished.
In a first aspect, an embodiment of the present invention provides a terminal, where the terminal includes: a camera and a processor;
the camera is used for collecting images;
the processor is used for responding to the first tool shooting instruction and acquiring a first image captured by the camera; inputting the first image into a trained neural network model to determine the identifiers of the first tools in the first image; determining the number of first tools contained in the first image according to those identifiers; and comparing the number of the first tools with the number of second tools contained in a second image, and judging whether any tool is missing according to the comparison result;
the first image is an image containing all of the tools sorted after use, the identifier of a first tool is information characterizing the first tool's type, and the second image is an image containing all of the tools before use.
According to the above scheme, the first image, which contains all of the tools sorted after use, is input into the trained neural network model to obtain the identifiers of the first tools contained in it; the number of each first tool contained in the first image, i.e., the number of each tool sorted after use, is determined from those identifiers; and the number of each tool after use is compared with the number of each tool before use, so whether any tool has been lost after use can be judged accurately and efficiently from the comparison result.
In some exemplary embodiments, the output of the trained neural network model further includes location information corresponding to the identifier of the first tool, and the terminal further includes a display screen;
the display screen is used for displaying a user interface;
the processor is further configured to display, through a display screen, an identifier of a first tool and corresponding position information after the first image is input into the trained neural network model and before the number of the first tools included in the first image is determined; or marking the first image according to the identification of the first tool and the corresponding position information, and displaying the marked first image through the display screen; and modifying the identification of the first tool in response to the identification modification instruction.
According to the above scheme, the identifiers of the first tools in the first image output by the trained neural network model are not necessarily identical to the identifiers of the first tools actually contained in the first image. By displaying the identifiers of the first tools and the corresponding position information on the display screen, or displaying the first image marked according to those identifiers and positions, and then modifying any erroneous identifier in response to an identifier modification instruction, a more accurate number of first tools can be determined, which further improves the accuracy of judging whether any tool is missing.
In some exemplary embodiments, the processor is further configured to, prior to inputting the first image into the trained neural network model,
determining a segmentation line according to a second identifier in the first image, wherein the second identifier is an identifier preset between sorted tools before the first image is acquired;
segmenting the first image based on the segmentation line.
According to the above scheme, the first image is divided into a plurality of regions and those regions are input into the trained neural network model, so the identifiers of the first tools in the first image can be determined more accurately.
In some exemplary embodiments, if there are a plurality of trained neural network models, the processor is further configured to:
before the first image is input into the trained neural network model, determining a target neural network model corresponding to the image mode of the first image carried by the first tool shooting instruction based on a preset corresponding relation between the trained neural network model and the mode;
the processor is specifically configured to:
inputting the first image into the target neural network model.
According to the above scheme, the target neural network model corresponding to the image mode carried by the first tool shooting instruction is determined and the first image is input into that target neural network model, so the identifiers of the first tools in the first image can be determined more accurately.
In some exemplary embodiments, the processor is further configured to:
after judging whether any tool is missing, if it is judged that a tool is missing, sending information carrying the identifier of the missing tool and the missing quantity in a preset notification mode.
According to the above scheme, if it is judged that tools are missing, information carrying the identifiers of the missing tools and the missing quantities is sent in a preset notification mode; this reminds the user of the missing tools and reduces the probability of tools being lost at the construction site.
In a second aspect, an embodiment of the present invention provides a tool processing method, including:
responding to a first tool shooting instruction, and acquiring a first image acquired through a camera;
inputting the first image into a trained neural network model to determine the identifiers of the first tools in the first image;
determining the number of first tools contained in the first image according to the identification of the first tools;
comparing the number of the first tools with the number of second tools contained in a second image, and judging whether any tool is missing according to the comparison result;
the first image is an image containing all tools sorted after the tools are used, the identifier of the first tool is information representing the first tool type, and the second image is an image containing all tools before the tools are used.
In some exemplary embodiments, the output of the trained neural network model further includes location information corresponding to the identifier of the first tool, and after inputting the first image into the trained neural network model and before determining the number of first tools contained in the first image, the method further includes:
displaying the identification of the first tool and the corresponding position information through a display screen; or marking the first image according to the identification of the first tool and the corresponding position information, and displaying the marked first image through the display screen;
and modifying the identification of the first tool in response to the identification modification instruction.
In some exemplary embodiments, before inputting the first image into the trained neural network model, the method further includes:
determining a segmentation line according to a second identifier in the first image, wherein the second identifier is an identifier preset between sorted tools before the first image is acquired;
segmenting the first image based on the segmentation line.
In some exemplary embodiments, before inputting the first image into the trained neural network model, if there are a plurality of trained neural network models, the method further includes:
determining a target neural network model corresponding to an image mode of a first image carried by the first tool shooting instruction based on a preset corresponding relation between a trained neural network model and the mode;
the inputting the first image into the trained neural network model includes:
inputting the first image into the target neural network model.
In some exemplary embodiments, after judging whether any tool is missing, the method further includes:
if it is judged that a tool is missing, sending information carrying the identifier of the missing tool and the missing quantity in a preset notification mode.
In a third aspect, an embodiment of the present invention provides a tool processing apparatus, including:
the acquisition module is used for responding to a first tool shooting instruction and acquiring a first image acquired by a camera;
a determining module, configured to input the first image into a trained neural network model to determine an identifier of a first tool in the first image;
the determining module is further configured to determine, according to the identifier of the first tool, the number of first tools included in the first image;
the judging module is used for comparing the number of the first tools with the number of second tools contained in the second image, and judging whether any tool is missing according to the comparison result;
the first image is an image containing all tools sorted after the tools are used, the identifier of the first tool is information representing the first tool type, and the second image is an image containing all tools before the tools are used.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the tool processing method according to any one of the second aspects.
In addition, for technical effects brought by any one implementation manner of the second aspect to the fourth aspect, reference may be made to technical effects brought by different implementation manners of the first aspect, and details are not described here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a block diagram of a hardware configuration of a terminal according to an embodiment of the present invention;
fig. 2 is a block diagram of a software structure of a terminal according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart diagram of a first method for tool processing according to an embodiment of the present invention;
FIG. 4A is a schematic diagram of a first user interface provided in an embodiment of the present invention;
FIG. 4B is a schematic diagram of a second user interface provided in the embodiment of the present invention;
FIG. 5 is a schematic diagram of a third user interface provided by the embodiment of the invention;
FIG. 6 is a schematic flow chart diagram of a second method for tool processing according to an embodiment of the present invention;
FIG. 7A is a schematic diagram of a fourth user interface provided in the embodiments of the present invention;
FIG. 7B is a schematic diagram of a fifth user interface provided in the embodiment of the present invention;
FIG. 8 is a schematic flow chart diagram of a third method for tool processing according to an embodiment of the present invention;
FIG. 9 is a schematic view of a parting line according to an embodiment of the present invention;
FIG. 10 is a schematic flow chart diagram illustrating a fourth method for tool processing according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a sixth user interface provided in the embodiments of the present invention;
FIG. 12 is a schematic view of a tool processing apparatus according to an embodiment of the present invention;
fig. 13 is a schematic diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" in the embodiments of the present invention describes an association relationship of associated objects, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
In the description of the present application, it is to be noted that, unless otherwise explicitly stated or limited, the term "connected" is to be understood broadly, and may for example be directly connected, indirectly connected through an intermediate medium, or be a communication between two devices. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
After the construction is finished, constructors need to check the tools, verify that the set is complete and nothing is missing, and leave the construction site only after confirming that no tool has been left behind.
However, the operators are very likely to make counting errors, and it is difficult for them to determine exactly which tools are missing.
In order to determine accurately whether a tool has been lost after construction is finished, the embodiments of the invention provide a terminal and a tool processing method: a first image containing all of the tools sorted after use is input into a trained neural network model to obtain the identifiers of the first tools contained in it; the number of each first tool contained in the first image, i.e., the number of each tool sorted after use, is determined from those identifiers; and the number of each tool after use is compared with the number of each tool before use, so whether any tool has been lost after use can be judged accurately and quickly from the comparison result.
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 shows a block diagram of a hardware configuration of a terminal 100.
The following describes an embodiment specifically by taking the terminal 100 as an example. It should be understood that the terminal 100 shown in fig. 1 is merely an example, and that the terminal 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
As shown in fig. 1, the terminal 100 includes: radio Frequency (RF) circuit 110, memory 120, display unit 130, camera 140, sensor 150, audio circuit 160, wireless fidelity (Wi-Fi) module 170, processor 180, bluetooth module 181, and power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 180 for processing; the uplink data may be transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 performs various functions of the terminal 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The memory 120 stores an operating system that enables the terminal 100 to operate. The memory 120 may store an operating system and various application programs, and may also store codes for performing the methods described in the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal 100, and particularly, the display unit 130 may include a touch screen 131 disposed on the front surface of the terminal 100 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal 100. Specifically, the display unit 130 may include a display screen 132 disposed on the front surface of the terminal 100. The display screen 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display various graphical user interfaces described herein.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the terminal 100, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals.
The terminal 100 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, etc.
Audio circuitry 160, speaker 161, and microphone 162 may provide an audio interface between a user and terminal 100. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161. The terminal 100 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 160, and outputs the audio data to the RF circuit 110 to be transmitted to, for example, another terminal or outputs the audio data to the memory 120 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi belongs to a short-distance wireless transmission technology, and the terminal 100 can help a user to send and receive e-mails, browse webpages, access streaming media, and the like through the Wi-Fi module 170, and provides wireless broadband internet access for the user. Information interaction can also be carried out with other equipment with the Wi-Fi module through the Wi-Fi module.
The processor 180 is a control center of the terminal 100, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. In the present application, the processor 180 may run an operating system, an application program, a user interface display, and a touch response, and the processing method described in the embodiments of the present application. Additionally, the processor 180 and the display unit 130 may be coupled.
And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol.
The terminal 100 also includes a power supply 190 (e.g., a battery) to power the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The terminal 100 may also be configured with power buttons for powering the terminal on and off, and locking the screen.
Fig. 2 is a block diagram of a software configuration of the terminal 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. It can obtain the size of the display screen, judge whether there is a status bar, lock the screen, capture screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the terminal 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the terminal vibrates, an indicator light flashes, and the like.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part contains the functions that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the terminal 100 software and hardware in connection with capturing a photo scene.
When the touch screen 131 receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input events are stored at the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation, and taking a control corresponding to the click operation as a control of a camera application icon as an example, the camera application calls an interface of an application framework layer, starts the camera application, further starts a camera drive by calling a kernel layer, and captures a still image or a video through the camera 140.
The terminal 100 in the embodiment of the present application may be a mobile phone, a tablet computer, a wearable device, a notebook computer, a television, and the like.
Fig. 3 is a schematic flowchart of a first tool processing method according to an embodiment of the present invention, which is applied to the terminal, and as shown in fig. 3, the method may include:
step 301: and responding to the shooting instruction of the first tool, and acquiring a first image acquired through the camera.
Wherein the first image is an image containing all tools sorted after the tools are used.
In this embodiment, after the tools are used, an image containing all of the sorted tools is captured and the tools in the image are identified, so that all of the tools sorted after use can be determined accurately and quickly.
In some embodiments, the captured image may be presented via a user interface, so that the user can confirm that the captured image contains all of the tools sorted after use and prevent some tools from being left outside the image. The first tool shooting instruction may be triggered in, but not limited to, the following way:
referring to fig. 4A, the upper portion of the user interface displays an image collected by the camera, the lower portion is provided with a "photograph" button, and the user touches the "photograph" button to trigger the first tool to photograph the command.
Fig. 4A is only an example of a possible implementation manner of the user interface, and other similar user interfaces may also be used, for example, the trigger key of the first tool shooting instruction may also be a "shooting" key, or a circular icon key, and details thereof are not repeated here.
Step 302: inputting the first image into the trained neural network model to determine the identifiers of the first tools in the first image.
Wherein the identification of the first tool is information characterizing the first tool type.
In this embodiment, after the first image containing all of the tools sorted after use is obtained, the tools in the first image can be accurately identified through the trained neural network model; based on this, the first image needs to be input into the trained neural network model.
In this embodiment, the first image is input into the trained neural network model, which performs feature extraction and outputs the identifiers of the first tools in the first image. The identifier of a first tool may be its name, such as wire stripper, open-end wrench or wire cutter, or may be its serial number.
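As a concrete illustration of this step: the embodiment does not name the detection network, so the following Python sketch uses a torchvision Faster R-CNN purely as a stand-in; the TOOL_NAMES mapping, the weights file name and the 0.5 score threshold are illustrative assumptions.

```python
from PIL import Image
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Hypothetical class-index-to-identifier mapping; index 0 is the background class.
TOOL_NAMES = {1: "wire stripper", 2: "open-end wrench", 3: "wire cutter", 4: "hand hammer"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=len(TOOL_NAMES) + 1)
model.load_state_dict(torch.load("tool_detector.pt"))  # hypothetical trained weights
model.eval()

def identify_tools(image_path):
    """Return (identifier, box, score) triples for the first image."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]  # dict with 'boxes', 'labels', 'scores'
    return [
        (TOOL_NAMES[int(label)], box.tolist(), float(score))
        for box, label, score in zip(out["boxes"], out["labels"], out["scores"])
        if float(score) > 0.5  # drop low-confidence detections
    ]
```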
The acquired first image may be blurred, shot at a large offset angle, or otherwise flawed, in which case the user may want to re-capture it; directly inputting such a badly shot first image into the trained neural network model would increase the model's computation and affect recognition efficiency. Thus, in some embodiments, the first image is input into the trained neural network model only upon receiving a tool recognition instruction.
Referring to fig. 4B, an "identify" button is further disposed on a lower portion of the user interface, and a user touches the "identify" button to trigger a tool identification command.
Fig. 4B is only an example of possible implementation manners of the user interface, for example, the trigger key of the first tool shooting instruction and the tool recognition instruction may be implemented in other manners, and details are not described here.
The trained neural network model can be obtained by, but not limited to, training in the following manner:
and taking the sample image and the identification of the tool actually contained in the sample image as input, taking the prediction result as output, and training the initial neural network model to obtain the trained neural network model. The initial neural network model can adopt a new version of detection network, so that the identification precision and the running speed are greatly improved. The sample image is an image which is shot under the conditions of different angles, distances, environments, illumination and the like and contains the arrangement and combination of different tools, so that the applicability of the model is ensured.
Step 303: determining the number of first tools contained in the first image according to the identifiers of the first tools.
In this embodiment, after obtaining the identifier of the first tool in the first image, the number of each first tool included in the first image needs to be obtained through statistics, so that the first tool can be compared with the number of the first tools before the tools are used. For example:
the first tool in the first image is marked by a wire stripper, an opening wrench, a wire cutter, an opening wrench and a hand hammer, and the number of each tool which is counted and arranged after the tools are used is respectively: 1 wire stripper, 2 opening plates, 1 wire cutter and 1 hand hammer.
The first tool is merely an example and is not a limitation of the present embodiment.
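A minimal sketch of this counting step, assuming the model output has already been reduced to a list of identifier strings as in the example above:

```python
from collections import Counter

# Identifiers as output for the first image in the example above.
first_identifiers = ["wire stripper", "open-end wrench", "wire cutter",
                     "open-end wrench", "hand hammer"]
first_counts = Counter(first_identifiers)
# Counter({'open-end wrench': 2, 'wire stripper': 1, 'wire cutter': 1, 'hand hammer': 1})
```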
Step 304: comparing the number of the first tools with the number of second tools contained in the second image, and judging whether any tool is missing according to the comparison result.
Wherein the second image is an image containing all tools before the tool is used.
In this embodiment, the manner of determining the number of the second tools included in the second image may refer to the manner of determining the number of the first tools, and details thereof are not repeated here.
After the number of each type of first tool contained in the first image is obtained, it is compared with the number of each type of second tool contained in the second image, so whether any tool has been lost after use can be judged accurately. For example:
each first tool is: 1 wire stripper, 2 opening plates, 1 wire cutter and 1 hand hammer; each second tool is: 1 wire stripper, 2 opening wrenches, 1 wire cutter, 1 hand hammer and 1 screwdriver, the instrument has lacked a screwdriver after using.
According to the above scheme, the first image, which contains all of the tools sorted after use, is input into the trained neural network model to obtain the identifiers of the first tools contained in it; the number of each first tool contained in the first image, i.e., the number of each tool sorted after use, is determined from those identifiers; and the number of each tool after use is compared with the number of each tool before use, so whether any tool has been lost after use can be judged accurately and conveniently from the comparison result.
In some embodiments, after judging whether any tool is missing, if it is judged that a tool is missing, information carrying the identifier of the missing tool and the missing quantity may be sent in a preset notification mode.
Referring to fig. 5, the user interface shows the name of the missing tool (screwdriver) and the number of missing tools (1).
Fig. 5 is only an example of a possible implementation manner of the user interface, and other user interfaces showing the identification of the missing tool and the missing number may be adopted in the embodiment.
According to the above scheme, if it is judged that tools are missing, information carrying the identifiers of the missing tools and the missing quantities is sent in a preset notification mode; this reminds the user of the missing tools and reduces the probability of tools being lost at the construction site.
Fig. 6 is a schematic flowchart of a second tool processing method according to an embodiment of the present invention, which is applied to the terminal, and as shown in fig. 6, the method may include:
step 601: and responding to the shooting instruction of the first tool, and acquiring a first image acquired through the camera.
Wherein the first image is an image containing all tools sorted after the tools are used.
Step 601 is the same as the implementation of step 301, and is not described herein again.
Step 602: inputting the first image into the trained neural network model to determine the identifiers of the first tools in the first image.
Wherein the identification of the first tool is information characterizing the first tool type.
In this embodiment, the output of the trained neural network model further includes location information corresponding to the identifier of the first tool. The specific implementation manner of the position information is not limited in this embodiment, and for example, the specific implementation manner may be coordinates of four corners of the area where the first tool is located.
The trained neural network model can be obtained by, but not limited to, training in the following manner:
and taking the sample image, the identification of the tool actually contained in the sample image and the coordinates of four corners of the rectangular area where the tool is located as input, taking the prediction result as output, and training the initial neural network model to obtain the trained neural network model.
Step 603: displaying the identifiers of the first tools and the corresponding position information on the display screen; or marking the first image according to the identifiers of the first tools and the corresponding position information, and displaying the marked first image on the display screen.
In this embodiment, the identifiers of the first tools in the first image determined by the trained neural network model are not necessarily identical to the identifiers of the first tools actually contained in the first image; that is, the trained neural network model may recognize some tools incorrectly. Based on this, the recognition result needs to be presented in some manner so that erroneous identifiers can be modified. It can be presented in the following ways:
1) the identification of the first tool determined by the trained neural network model and the corresponding location information may be directly displayed in a list form.
2) Or, the area of each identifier in the first image is determined according to the position information corresponding to the identifiers of the first tools, those areas are framed in the first image with outlines, and the corresponding identifier is labeled beside each framed area.
Displayed in this way, the user can clearly see which tool in the image each identifier determined by the trained neural network model corresponds to, and judge which identifiers are wrong.
3) In some embodiments, the output of the trained neural network model further includes a prediction accuracy for each first tool's identifier. The identifiers determined by the model, the corresponding position information and the corresponding prediction accuracies may also be displayed in list form; or, as shown in fig. 7A, the identifiers and corresponding position information of all the first tools are displayed in list form, and any prediction accuracy below a preset accuracy threshold is highlighted. For example, if the accuracy threshold is 85% and the predicted accuracy for the hand hammer is 70%, which is below the threshold, the identifier of a first tool with a high probability of prediction error can be displayed more prominently.
4) In some embodiments, the step of determining the number of first tools contained in the first image may be performed first, and the number of each first tool displayed in a list; meanwhile, the area of each identifier in the first image is determined according to the corresponding position information, those areas are framed with outlines, and the corresponding identifier is labeled beside each framed area. Referring to fig. 7B, the upper portion of the user interface displays the first image with the corresponding identifier marked above each framed area, and the lower portion shows a list of the number of each first tool.
In this way, the user can clearly see which tool each identifier corresponds to and the number of each tool, and judge which identifiers are wrong.
The above display manners and user interfaces are only examples and do not limit this embodiment; in addition, the above display manners may be combined, which is not repeated here.
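A minimal sketch of display manner 2), combined with the threshold highlighting of manner 3); it assumes OpenCV and detection triples shaped like those in the inference sketch of step 302:

```python
import cv2

ACCURACY_THRESHOLD = 0.85  # preset accuracy threshold from the example above

def mark_first_image(img, detections):
    """detections: (identifier, (x1, y1, x2, y2), score) triples."""
    for name, (x1, y1, x2, y2), score in detections:
        # highlight low-confidence identifiers in red, others in green (BGR order)
        color = (0, 0, 255) if score < ACCURACY_THRESHOLD else (0, 255, 0)
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
        cv2.putText(img, name, (int(x1), max(int(y1) - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return img
```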
Step 604: modifying the identifier of the first tool in response to an identifier modification instruction.
As described above, after the recognition result is displayed, wrong identifiers can be modified. No matter which display manner is adopted, the user can touch and delete an identifier and key in a new one, thereby triggering an identifier modification instruction.
In this embodiment, when the terminal receives an identifier modification instruction for a certain identifier, the identifier is replaced with a new identifier carried in the identifier modification instruction.
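A minimal sketch of applying such an instruction to the detection results, using the hypothetical (identifier, box, score) layout from the earlier sketches:

```python
def modify_identifier(detections, index, new_identifier):
    """Replace the identifier selected by the user with the newly keyed-in one."""
    identifier, box, score = detections[index]
    detections[index] = (new_identifier, box, score)
    return detections
```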
Step 605: determining the number of first tools contained in the first image according to the identifiers of the first tools.
This step 605 may refer to an implementation of step 303 described above.
In addition, if the number of first tools contained in the first image has been determined before step 603, step 605 is to re-determine the number of first tools contained in the first image.
Step 606: comparing the number of the first tools with the number of second tools contained in the second image, and judging whether any tool is missing according to the comparison result.
Wherein the second image is an image containing all tools before the tool is used.
This step 606 is the same as the implementation of the step 304, and is not described here again.
According to the above scheme, the identifiers of the first tools in the first image output by the trained neural network model are not necessarily identical to the identifiers of the first tools actually contained in the first image. The identifiers of the first tools and the corresponding position information are displayed on the display screen, or the first image marked according to those identifiers and positions is displayed; then, in response to an identifier modification instruction, any erroneous identifier output by the trained neural network model is modified, so that a more accurate number of first tools can be determined, which further improves the accuracy of judging whether any tool is missing.
Fig. 8 is a schematic flowchart of a third method for processing a tool according to an embodiment of the present invention, which is applied to the terminal, and the method includes:
step 801: and responding to the shooting instruction of the first tool, and acquiring a first image acquired through the camera.
Wherein the first image is an image containing all tools sorted after the tools are used.
This step 801 is the same as the implementation of step 301, and is not described herein again.
Step 802: determining a dividing line according to the second identifier in the first image, and segmenting the first image based on the dividing line.
Here, the second identifier is an identifier preset between the sorted tools before the first image is captured.
Generally, a constructor carries many tools, so the first image may contain many tools, and directly inputting such an image into the trained neural network model affects its recognition accuracy. Based on this, the first image may be segmented before being input. The dividing line may be determined in, but not limited to, the following ways:
1) The second identifier is a special symbol preset between the sorted tools. As shown in fig. 9, the second identifier is described as a dot (other symbols, such as a triangle, may also be used in this embodiment), and the vertical line passing through the center point of the second identifier is taken as the dividing line.
In addition, the second identifier may be given a special color (one that does not appear on any tool) to make it easier to detect.
2) The second identifier may also be a line segment of a special color preset between the sorted tools; the segment itself is taken as the dividing line, or the dividing line is determined from the segment and its extension.
3) If the sorted first tools are placed on a tool pack, i.e., the background of the first image is the tool pack, the boundary of each preset area of the tool pack can be used directly as a dividing line.
The above ways of determining the dividing line are only examples; any other dividing line that does not cut through a tool in the image may also be used in this embodiment.
In some specific embodiments, if the first image has a rectangular background, such as a tool bag or A4 paper, the first image may be perspective-transformed according to the four corners of the background using the Open Source Computer Vision Library (OpenCV); when the perspective-transformed first image is then segmented, the tools in it are less likely to be cut by the dividing lines.
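A minimal sketch of this segmentation flow with OpenCV, as referenced above; the output size, the marker colour range and the corner ordering are illustrative assumptions:

```python
import cv2
import numpy as np

def rectify(img, corners, size=(800, 600)):
    """Perspective-transform a rectangular background; corners ordered tl, tr, br, bl."""
    w, h = size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(img, M, (w, h))

def split_at_markers(img, lower=(0, 0, 160), upper=(80, 80, 255)):
    """Cut the image vertically at dot markers of a special colour (BGR range assumed)."""
    mask = cv2.inRange(img, np.array(lower, np.uint8), np.array(upper, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = sorted(int(x + w / 2) for x, y, w, h in map(cv2.boundingRect, contours))
    bounds = [0] + centers + [img.shape[1]]
    return [img[:, a:b] for a, b in zip(bounds, bounds[1:]) if b > a]
```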
Step 803: inputting the first image into the trained neural network model to determine the identifiers of the first tools in the first image.
Wherein the identification of the first tool is information characterizing the first tool type.
Step 804: determining the number of first tools contained in the first image according to the identifiers of the first tools.
Step 805: comparing the number of the first tools with the number of second tools contained in the second image, and judging whether any tool is missing according to the comparison result.
The steps 803-805 are the same as the steps 302-304, and will not be described herein again.
According to the above scheme, the first image is divided into a plurality of regions and those regions are input into the trained neural network model; the model only needs to identify a few tools at a time, which reduces interference from the permutations and combinations of different tools, so the identifiers of the first tools in the first image can be determined more accurately.
Fig. 10 is a schematic flowchart of a fourth tool processing method according to an embodiment of the present invention, which is applied to the terminal, and the method includes:
step 1001: and responding to the shooting instruction of the first tool, and acquiring a first image acquired through the camera.
Wherein the first image is an image containing all tools sorted after the tools are used.
Step 1001 is the same as step 301, and will not be described herein again.
Step 1002: determining the target neural network model corresponding to the image mode of the first image carried by the first tool shooting instruction, based on the preset correspondence between trained neural network models and image modes.
The sorted first tools may be placed on different backgrounds, and the trained neural network model may judge them differently depending on the background. Based on this, multiple neural network models can be trained from sample images with different backgrounds, so that a suitable trained neural network model can be selected according to the background of the first image. For example:
taking sample images with a tool pack as a background and the identification of the tool actually contained in the sample images as input, taking a prediction result as output, and training the initial neural network model to obtain a trained first neural network model; and taking sample images which are not the background of the tool pack and the identification of the tool actually contained in the sample images as input, taking the prediction result as output, and training the initial neural network model to obtain a trained second neural network model.
Referring to fig. 11, a "normal mode" key and a "toolkit mode" key are displayed on the user interface. If the user touches the "normal mode" key, the image mode of the first image is the normal mode and the corresponding target neural network model is the second neural network model; if the user touches the "toolkit mode" key, the image mode of the first image is the toolkit mode and the corresponding target neural network model is the first neural network model.
The neural network models and user interfaces are exemplary only and not intended as limitations on the present embodiments.
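A minimal sketch of this mode-to-model correspondence; the registry keys, the instruction format and the way the two detectors are constructed are illustrative assumptions:

```python
import torchvision

# Stand-ins for the two trained detectors (in practice, load trained weights).
first_model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=5)   # tool-pack backgrounds
second_model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=5)  # other backgrounds

MODEL_REGISTRY = {
    "toolkit": first_model,
    "normal": second_model,
}

def select_target_model(shooting_instruction):
    """shooting_instruction is assumed to be a dict carrying the image mode."""
    mode = shooting_instruction.get("image_mode", "normal")
    return MODEL_REGISTRY[mode]
```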
Step 1003: inputting the first image into the target neural network model to determine the identifiers of the first tools in the first image.
Wherein the identification of the first tool is information characterizing the first tool type.
This step 1003 may refer to the implementation manner of the step 302, which is not described herein again.
Step 1004: determining the number of first tools contained in the first image according to the identifiers of the first tools.
Step 1005: comparing the number of the first tools with the number of second tools contained in the second image, and judging whether any tool is missing according to the comparison result.
The implementation of steps 1004-1005 is the same as that of steps 303-304, and will not be described herein again.
According to the above scheme, the target neural network model corresponding to the image mode carried by the first tool shooting instruction is determined and the first image is input into that target neural network model, so the identifiers of the first tools in the first image can be determined more accurately.
In some embodiments, the terminal may be used only to capture and display images, while the steps of the above method embodiments are executed by a server; for specific implementations, refer to the above embodiments, which are not repeated here.
As shown in fig. 12, based on the same inventive concept, an embodiment of the present invention provides a tool processing apparatus 1200, including: an obtaining module 1201, a determining module 1202 and a judging module 1203.
an obtaining module 1201, configured to obtain, in response to a first tool shooting instruction, a first image captured by the camera;
a determining module 1202, configured to input the first image into the trained neural network model to determine the identification of the first tool in the first image;
the determining module 1202 is further configured to determine, according to the identification of the first tool, the number of first tools included in the first image;
the judging module 1203 is configured to compare the number of the first tools with the number of the second tools included in the second image, and determine whether any tool is missing according to the comparison result;
the first image is an image containing all tools after they have been sorted following use, the identification of the first tool is information characterizing the first tool type, and the second image is an image containing all tools before use.
In some exemplary embodiments, the output of the trained neural network model further includes position information corresponding to the identification of the first tool, and the determining module 1202 is further configured to:
after inputting the first image into the trained neural network model and before determining the number of first tools contained in the first image,
display the identification of the first tool and the corresponding position information through a display screen, or mark the first image according to the identification of the first tool and the corresponding position information and display the marked first image through the display screen (a sketch of such marking follows below);
and modify the identification of the first tool in response to an identification modification instruction.
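A minimal sketch of such marking, assuming OpenCV and detections supplied as (identification, bounding box) pairs, which is an assumed output format since the embodiments do not fix one:

    import cv2

    def mark_first_image(image, detections):
        # Draw each tool's bounding box and identification label on a
        # copy of the first image for display on the terminal screen.
        # detections: list of (tool_id, (x, y, w, h)) tuples.
        marked = image.copy()
        for tool_id, (x, y, w, h) in detections:
            cv2.rectangle(marked, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(marked, tool_id, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        return marked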
In some exemplary embodiments, the determining module 1202 is further configured to: before the first image is input into the trained neural network model, determine a segmentation line according to a second identifier in the first image, where the second identifier is an identifier preset between the sorted tools before the first image is acquired; and segment the first image based on the segmentation line.
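A minimal sketch of locating such a preset divider and splitting the image along it; it assumes the second identifier is a strip of a distinctive color, and the HSV range and all names are illustrative assumptions rather than anything specified by the embodiments:

    import cv2
    import numpy as np

    def split_on_divider(image,
                         lower_hsv=(40, 120, 120), upper_hsv=(80, 255, 255)):
        # Find a preset colored divider strip (the "second identifier")
        # and split the first image into sub-images at its column.
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
        # Columns where more than half of the pixels match the divider color.
        cols = np.where(mask.sum(axis=0) > 0.5 * 255 * mask.shape[0])[0]
        if len(cols) == 0:
            return [image]  # no divider found; keep the image whole
        x = int(cols.mean())  # column of the segmentation line
        return [image[:, :x], image[:, x:]]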
In some exemplary embodiments, if there are a plurality of trained neural network models, the determining module 1202 is further configured to, before inputting the first image into the trained neural network model,
determine, based on a preset correspondence between the trained neural network models and image modes, a target neural network model corresponding to the image mode of the first image carried in the first tool shooting instruction;
in this case, the determining module 1202 inputting the first image into a trained neural network model includes:
inputting the first image into the target neural network model.
In some exemplary embodiments, the judging module 1203 is further configured to: after determining whether any tool is missing, if a tool is determined to be missing, send information carrying the identification of the missing tool and the missing quantity in a preset notification mode.
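A minimal sketch of composing such a notification from the missing-tool dictionary produced earlier; the delivery channel is left abstract, and send_notification is a hypothetical hook rather than an API defined by the embodiments:

    def notify_missing(missing, send_notification):
        # Format the missing tool identifications and quantities and
        # hand the message to a preset notification channel.
        if not missing:
            return
        lines = [f"missing {count} x {tool_id}"
                 for tool_id, count in missing.items()]
        send_notification("Tool check failed:\n" + "\n".join(lines))

    # notify_missing({"wrench": 1}, send_notification=print)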
Since the apparatus is the apparatus that performs the method in the embodiment of the present invention, and the principle by which the apparatus solves the problem is similar to that of the method, for the implementation of the apparatus, refer to the implementation of the method; repeated details are omitted.
As shown in fig. 13, based on the same inventive concept, an embodiment of the present invention provides a terminal 1300, where the terminal 1300 includes: a processor 1301 and a memory 1302, wherein the memory 1302 stores program code, which when executed by the processor 1301, causes the processor to perform the following:
in response to a first tool shooting instruction, acquiring a first image captured by the camera;
inputting the first image into a trained neural network model to determine the identification of a first tool in the first image;
determining the number of first tools contained in the first image according to the identification of the first tool;
comparing the number of the first tools with the number of the second tools contained in the second image, and determining whether any tool is missing according to the comparison result;
the first image is an image containing all tools after they have been sorted following use, the identification of the first tool is information characterizing the first tool type, and the second image is an image containing all tools before use.
In some optional embodiments, the output of the trained neural network model further includes position information corresponding to the identification of the first tool, and the processor 1301 is further configured to: after inputting the first image into the trained neural network model and before determining the number of first tools contained in the first image,
display the identification of the first tool and the corresponding position information through a display screen, or mark the first image according to the identification of the first tool and the corresponding position information and display the marked first image through the display screen;
and modify the identification of the first tool in response to an identification modification instruction.
In some optional embodiments, the processor 1301 is further configured to, before inputting the first image into the trained neural network model,
determine a segmentation line according to a second identifier in the first image, where the second identifier is an identifier preset between the sorted tools before the first image is acquired;
and segment the first image based on the segmentation line.
In some alternative embodiments, if there are multiple trained neural network models, the processor 1301 is further configured to:
before the first image is input into the trained neural network model, determine, based on a preset correspondence between the trained neural network models and image modes, a target neural network model corresponding to the image mode of the first image carried in the first tool shooting instruction;
the processor 1301 is then specifically configured to:
input the first image into the target neural network model.
In some optional embodiments, the processor 1301 is further configured to:
after determining whether any tool is missing, if a tool is determined to be missing, send information carrying the identification of the missing tool and the missing quantity in a preset notification mode.
Since the terminal is the terminal that executes the method in the embodiment of the present invention, and the principle by which the terminal solves the problem is similar to that of the method, for the implementation of the terminal, refer to the implementation of the method; repeated details are omitted.
Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the tool processing method described above. In particular, the readable storage medium may be a non-volatile readable storage medium.
The present application is described above with reference to block diagrams and/or flowchart illustrations of methods, apparatus (systems) and/or computer program products according to embodiments of the invention. It will be understood that one block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, and/or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the subject application may also be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). Furthermore, the present application may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. In the context of this application, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A terminal, characterized in that the terminal comprises: a camera and a processor;
the camera is used for collecting images;
the processor is used for: responding to a first tool shooting instruction and acquiring a first image captured by the camera; inputting the first image into a trained neural network model to determine the identification of a first tool in the first image; determining the number of first tools contained in the first image according to the identification of the first tool; and comparing the number of the first tools with the number of the second tools contained in the second image, and determining whether any tool is missing according to the comparison result;
wherein the first image is an image containing all tools after they have been sorted following use, the identification of the first tool is information characterizing the first tool type, and the second image is an image containing all tools before use.
2. The terminal of claim 1, wherein the output of the trained neural network model further comprises position information corresponding to the identification of the first tool, and wherein the terminal further comprises a display screen;
the display screen is used for displaying a user interface;
the processor is further configured to: after the first image is input into the trained neural network model and before the number of the first tools contained in the first image is determined, display the identification of the first tool and the corresponding position information through the display screen, or mark the first image according to the identification of the first tool and the corresponding position information and display the marked first image through the display screen; and modify the identification of the first tool in response to an identification modification instruction.
3. The terminal of claim 1, wherein the processor is further configured to, prior to inputting the first image into the trained neural network model,
determine a segmentation line according to a second identifier in the first image, wherein the second identifier is an identifier preset between the sorted tools before the first image is acquired;
and segment the first image based on the segmentation line.
4. The terminal of claim 1, wherein if there are multiple trained neural network models, the processor is further configured to:
before the first image is input into the trained neural network model, determine, based on a preset correspondence between the trained neural network models and image modes, a target neural network model corresponding to the image mode of the first image carried in the first tool shooting instruction;
the processor is specifically configured to:
input the first image into the target neural network model.
5. The terminal of any of claims 1 to 4, wherein the processor is further configured to:
after determining whether any tool is missing, if a tool is determined to be missing, send information carrying the identification of the missing tool and the missing quantity in a preset notification mode.
6. A method of tool processing, the method comprising:
in response to a first tool shooting instruction, acquiring a first image captured by a camera;
inputting the first image into a trained neural network model to determine the identification of a first tool in the first image;
determining the number of first tools contained in the first image according to the identification of the first tool;
comparing the number of the first tools with the number of the second tools contained in the second image, and determining whether any tool is missing according to the comparison result;
wherein the first image is an image containing all tools after they have been sorted following use, the identification of the first tool is information characterizing the first tool type, and the second image is an image containing all tools before use.
7. The method of claim 6, wherein the output of the trained neural network model further comprises position information corresponding to the identification of the first tool, and wherein after inputting the first image into the trained neural network model and before determining the number of first tools contained in the first image, the method further comprises:
displaying the identification of the first tool and the corresponding position information through a display screen; or marking the first image according to the identification of the first tool and the corresponding position information, and displaying the marked first image through the display screen;
and modifying the identification of the first tool in response to an identification modification instruction.
8. The method of claim 6, prior to inputting the first image into the trained neural network model, further comprising:
determining a segmentation line according to a second identifier in the first image, wherein the second identifier is an identifier preset between sorted tools before the first image is acquired;
segmenting the first image based on the segmentation line.
9. The method of claim 6, wherein, if there are multiple trained neural network models, the method further comprises, prior to inputting the first image into the trained neural network model:
determining, based on a preset correspondence between the trained neural network models and image modes, a target neural network model corresponding to the image mode of the first image carried in the first tool shooting instruction;
the inputting the first image into the trained neural network model includes:
inputting the first image into the target neural network model.
10. The method of any one of claims 6 to 9, further comprising, after determining whether any tool is missing:
if a tool is determined to be missing, sending information carrying the identification of the missing tool and the missing quantity in a preset notification mode.
CN202010930147.6A 2020-09-07 2020-09-07 Terminal and tool processing method Pending CN112199997A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010930147.6A CN112199997A (en) 2020-09-07 2020-09-07 Terminal and tool processing method


Publications (1)

Publication Number Publication Date
CN112199997A (en) 2021-01-08

Family

ID=74006482


Country Status (1)

Country Link
CN (1) CN112199997A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180129865A1 (en) * 2016-11-08 2018-05-10 Nec Laboratories America, Inc. Action recognition system with landmark localization on objects in images using convolutional neural networks
CN107103320A (en) * 2017-04-28 2017-08-29 常熟理工学院 Embedded medical data image recognition and integrated approach
CN108960209A (en) * 2018-08-09 2018-12-07 腾讯科技(深圳)有限公司 Personal identification method, device and computer readable storage medium
CN109726759A (en) * 2018-12-28 2019-05-07 北京旷视科技有限公司 Self-service method, apparatus, system, electronic equipment and computer-readable medium
CN109886092A (en) * 2019-01-08 2019-06-14 平安科技(深圳)有限公司 Object identifying method and its device
CN110738119A (en) * 2019-09-16 2020-01-31 深圳市国信合成科技有限公司 bill identification method, device, equipment and readable medium
CN111259893A (en) * 2020-01-19 2020-06-09 柳潆林 Intelligent tool management method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Tong et al.: "X-ray-based detection of the assembly correctness of internal parts in complex structural components", Laser & Optoelectronics Progress *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408355A (en) * 2021-05-20 2021-09-17 南昌大学 Micro-expression compression method based on three-branch decision and optical flow filtering mechanism
CN113408355B (en) * 2021-05-20 2022-04-12 南昌大学 Micro-expression compression method based on three-branch decision and optical flow filtering mechanism
TWI842379B (en) * 2023-02-09 2024-05-11 蔡俊維 Automatic wire coding image recognition printing and terminal crimping equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210108)