CN109298825B - Image classification method and device and electronic terminal - Google Patents

Image classification method and device and electronic terminal

Info

Publication number
CN109298825B
CN109298825B (application CN201811162759.4A)
Authority
CN
China
Prior art keywords
naming
inspection
button
area
identification point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811162759.4A
Other languages
Chinese (zh)
Other versions
CN109298825A (en)
Inventor
许国伟
苏奕辉
蚁克特
林昌松
王健宏
郑建荣
曾晓彦
邱跃鸿
刘晓枫
蓝天
张泽翼
关健
郑国恺
陈梓荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Shantou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Shantou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd, Shantou Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical Guangdong Power Grid Co Ltd
Priority to CN202011080269.7A priority Critical patent/CN112214152B/en
Priority to CN201811162759.4A priority patent/CN109298825B/en
Publication of CN109298825A publication Critical patent/CN109298825A/en
Application granted granted Critical
Publication of CN109298825B publication Critical patent/CN109298825B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the invention provide an image classification method, an image classification device and an electronic terminal. The method includes: displaying a multi-dimensional space model in a first area, wherein the multi-dimensional space model carries at least one identification point and each identification point is associated with an inspection photo in an inspection photo set; in response to a selection operation in the first area, displaying a naming dialog box, wherein a naming area and naming buttons are provided in the naming dialog box, and the naming buttons include a confirmation button; and, when the confirmation button is triggered, naming the inspection photo corresponding to the selected identification point according to the content of the naming area. With this method, a large number of inspection photos can be quickly classified and named, which saves time and improves inspection quality.

Description

Image classification method and device and electronic terminal
Technical Field
The invention relates to the field of power line inspection, and in particular to an image classification method and device and an electronic terminal.
Background
With rising requirements on the operation and maintenance quality and management level of power transmission lines, fine acceptance inspection has become an important part of transmission line inspection. Because a transmission line has a large number of target components that require fine acceptance inspection, and each must be photographed from the required angles, the number of photos is extremely large and a large volume of pictures is difficult to classify and sort.
The traditional approach is to confirm the photos one by one manually and then rename and classify them. This approach is inefficient and makes it difficult to sort a large number of photos in a short time; it not only consumes the inspectors' time, but also indirectly causes them to shoot fewer photos from fewer angles, so inspection quality drops accordingly.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide an image classification method, an image classification device and an electronic terminal.
In a first aspect, an embodiment of the present invention provides an image classification method, where the method includes:
displaying a multi-dimensional space model in a first area, wherein the multi-dimensional space model is provided with at least one identification point and each identification point is associated with an inspection photo in an inspection photo set;
in response to a selection operation in the first area, displaying a naming dialog box, wherein a naming area and naming buttons are provided in the naming dialog box, and the naming buttons include a confirmation button; and
when the confirmation button is triggered, naming the inspection photo corresponding to the selected identification point according to the content of the naming area.
In a second aspect, an embodiment of the present invention provides an image classification apparatus, including:
a first display module, configured to display a multi-dimensional space model in a first area, wherein at least one identification point is provided on the multi-dimensional space model and each identification point is associated with an inspection photo in an inspection photo set;
a second display module, configured to display a naming dialog box in response to a selection operation in the first area, wherein a naming area and naming buttons are provided in the naming dialog box, and the naming buttons include a confirmation button; and
a naming module, configured to name the inspection photo corresponding to the selected identification point according to the content of the naming area when the confirmation button is triggered.
In a third aspect, an embodiment of the present invention provides an electronic terminal, including a memory and a processor;
the memory is used for storing a program that enables the processor configured for executing the program stored in the memory to execute the method provided by the first aspect.
In a fourth aspect, an embodiment of the present invention provides a readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps of the method provided in the first aspect.
Compared with the prior art, the image classification method, device and electronic terminal associate the identification points on the multi-dimensional space model with the inspection photos, so that the inspection photos can be named simply by selecting some of the identification points on the model. When an identification point is selected and the confirmation button in the naming dialog box is triggered, the inspection photos are automatically named according to the content of the naming dialog box. The position of an identification point on the multi-dimensional space model reflects the shooting position of the corresponding inspection photo, and the shooting position provides a basis for classifying the photo. With this method, a large number of inspection photos can be quickly classified and named, which saves time and improves inspection quality.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic terminal according to an embodiment of the present invention.
Fig. 2 is a flowchart of an image classification method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a display interface provided in an embodiment of the present invention.
Fig. 4 is a schematic diagram of a naming dialog box according to an embodiment of the present invention.
Fig. 5 is a schematic distribution diagram of sub-regions according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the display position of the inspection photo according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of functional modules of an image classification apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Because the items covered by fine acceptance inspection of a power transmission line are extremely numerous, the same target component may need to be photographed from several shooting positions, which further increases the number of images. If the later sorting of the inspection photos takes too long, inspectors may reduce the number of shooting angles in order to shorten the sorting time and gradually turn to shooting only a fixed number of photos from fixed angles, which is very unfavourable for line inspection and lowers inspection quality. Existing automatic photo-naming software mainly prescribes a shooting order and then names the photos according to that order. However, because the mountainous terrain around towers is generally complex, tower structures vary and the effective shooting angles differ, shooting in a prescribed order is hard to achieve and wastes time and effort; it also makes inspectors reluctant to re-shoot when a photo is missing, and restricts their freedom to photograph carefully.
First embodiment
Referring to Fig. 1, a block diagram of an electronic terminal 100 according to an embodiment of the invention is shown. The electronic terminal 100 includes an image classification apparatus 110, a memory 120, a memory controller 130, a processor 140, an input/output unit 150, a display unit 160, and the like. The memory 120, the memory controller 130, the processor 140, the input/output unit 150 and the display unit 160 are electrically connected to one another, directly or indirectly, to enable data transmission or interaction; for example, these components may be electrically connected via one or more communication buses or signal lines. The image classification apparatus 110 includes at least one software functional module that may be stored in the memory 120 in the form of software or firmware, or built into the operating system (OS) of the electronic terminal 100. The processor 140 is configured to execute the executable modules stored in the memory 120, such as the software functional modules or computer programs included in the image classification apparatus 110.
The memory 120 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 120 is used for storing a program, and the processor 140 executes the program after receiving an execution instruction. Access to the memory 120 by the processor 140, and possibly other components, may be under the control of the memory controller 130. The method defined by the processes disclosed in any embodiment of the present invention and executed by the electronic terminal 100 may be applied to, or implemented by, the processor 140.
The processor 140 may be an integrated circuit chip having signal processing capability. The processor 140 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps and logical blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor 140 may be any conventional processor or the like.
The input/output unit 150 is used for the user to input data. The input/output unit 150 may be, but is not limited to, a mouse, a keyboard, and the like.
The display unit 160 is used to display image data for the user's reference. In this embodiment, the display unit 160 may be a liquid crystal display or a touch display. In the case of a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, which means the touch display can sense touch operations initiated at one or more positions on the display at the same time and pass the sensed touch operations to the processor 140 for calculation and processing.
It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic terminal 100. For example, the electronic terminal 100 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1. In the embodiment of the present invention, the electronic terminal 100 may be a device capable of connecting to a network and having an operation processing capability, such as a server, a personal computer, or a mobile device.
Second embodiment
Please refer to fig. 2, which is a flowchart illustrating an image classification method applied to the electronic terminal shown in fig. 1 according to an embodiment of the present invention.
In this embodiment, before the images are classified, the tower data of the inspected line and the inspection photo set need to be imported in advance. The data may be imported, for example, in the form of text, pictures, tables or models. The inspection photo set stores a large number of inspection photos. Through the imported tower data, a user can find a target tower according to the selected voltage level and transmission line, and the inspection photos related to the target tower can then be classified and renamed by the method of this embodiment. The target tower is the tower that needs to be inspected and accepted. In one case, the imported tower data may be presented in the form of a map, and the user can determine the target tower through the tower identifiers on the map.
The specific process shown in fig. 2 will be described in detail below.
Step S210: and displaying a multi-dimensional space model in the first area, wherein the multi-dimensional space model is provided with at least one identification point, and the identification point is associated with the inspection photos in the inspection photo set. The multidimensional space model is established based on the position of the tower, the structure of the tower, the position of a shooting point of the inspection photo and other factors. The position relation between the shooting point of the inspection picture and the tower can be displayed through the multi-dimensional space model, wherein the position of the shooting point of the inspection picture can be displayed on the multi-dimensional space model in the form of the identification point, and when a user rotates the model, the position relation between the shooting point of the inspection picture and the tower can be clearly known.
Step S220: and responding to the selected operation in the first area, and displaying a naming conversation frame, wherein a naming area and a naming button are arranged in the naming conversation frame, and the naming button comprises a confirmation button.
Step S230: and when the confirmation button is triggered, naming the routing inspection photo corresponding to the selected identification point according to the content in the naming area.
In one implementation, the inspection photos can be clustered to towers according to their position information, so that the inspection photos of the same tower are gathered together and a number of inspection photos matching the target tower are obtained. The position information may be the information of the shooting point, including longitude, latitude and height. Through this position information, the shooting point of an inspection photo can be associated with the structural features of a tower, and the tower that was photographed can be determined from the position of the shooting point. For example, it can be judged whether the distance from the shooting point to the coordinates of a tower X is less than or equal to a radius r; if so, the shooting point corresponding to the inspection photo photographed tower X. Alternatively, the tower coordinates closest to the shooting point can be looked up using the position information of the inspection photo to determine which tower was photographed. After the photographed tower X is determined, the position information of the inspection photo can further be used to determine which part of tower X was photographed, including the ground wire, the upper phase, the middle phase and the lower phase.
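The clustering step described above can be illustrated with the following minimal Python sketch, which assigns each photo to the nearest tower within a radius r. The data structures, field names and the flat-earth distance approximation are assumptions made for illustration only and are not taken from the patent.

```python
import math

# Hypothetical tower coordinates and photo metadata (longitude/latitude in
# degrees); the field names are illustrative assumptions.
towers = {
    "N26": {"lon": 116.5012, "lat": 23.3671},
    "N27": {"lon": 116.5068, "lat": 23.3729},
}
photos = [
    {"file": "DJI_0001.JPG", "lon": 116.5010, "lat": 23.3673},
    {"file": "DJI_0002.JPG", "lon": 116.5070, "lat": 23.3731},
]

def ground_distance_m(lon1, lat1, lon2, lat2):
    """Approximate ground distance in metres (equirectangular approximation)."""
    k = 111_000  # metres per degree of latitude, roughly
    dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

def assign_to_tower(photo, towers, radius_m=100.0):
    """Return the name of the nearest tower within radius_m, or None."""
    best_name, best_dist = None, float("inf")
    for name, pos in towers.items():
        d = ground_distance_m(photo["lon"], photo["lat"], pos["lon"], pos["lat"])
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= radius_m else None

for p in photos:
    print(p["file"], "->", assign_to_tower(p, towers))
```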
When the user selects the target tower, the multi-dimensional space model can be displayed in the first area of the display interface, and the identification points on the multi-dimensional space model are respectively associated with the inspection photos in the inspection photo set. Identification points on the same multi-dimensional space model can be regarded as corresponding to inspection photos of the same target tower, so classifying and naming the identification points simultaneously classifies and names the inspection photos.
When identification points are selected, step S220 is performed. In one case, the naming dialog box is displayed when one or more identification points on the multi-dimensional space model are selected. In another case, the condition for displaying the naming dialog box may be directly selecting or clicking an inspection photo. The naming dialog box may be fixedly displayed in the display interface, or may pop up when an identification point or inspection photo is selected. The user can perform naming through the naming dialog box, and the naming mode is divided into default naming and custom naming.
In this embodiment, as shown in Fig. 3, the display interface includes a first area and a second area; the first area may be used to assist classification, and the second area may be used to display the classification result. In other embodiments, more areas may be provided to achieve the same or a similar effect.
In one example, as shown in Fig. 4, a naming area and naming buttons are provided in the naming dialog box, and the naming buttons include a confirmation button. When the confirmation button is triggered, the content of the naming area is obtained and used to name the selected identification point or inspection photo. The content of the naming area may be entered by the user or filled in by default (in Fig. 4 it is filled in by default from the imported data); when the confirmation button is triggered, the content of the naming area becomes the new name of the inspection photo corresponding to the selected identification point.
With this method, the inspection photos can be classified and named simply by selecting identification points on the multi-dimensional space model and triggering the confirmation button of the naming dialog box. When the number of photos is large, the method can quickly classify and name the inspection photos without confirming, classifying and renaming them one by one, which helps improve the inspection efficiency of transmission lines in fine acceptance inspection scenarios and can readily be popularized in transmission line inspection work.
On the other hand, the method can associate the position information of an inspection photo with the coordinate position of the tower, and further with the structure of the tower. The position of an identification point on the multi-dimensional space model reflects the position of the shooting point relative to the target tower. Compared with merely clustering photos to a tower, the method of this embodiment can distinguish the different photographed components and different phase sequences of the same tower; even for a multi-circuit tower, the shooting positions can be distinguished through the distribution of the identification points on the multi-dimensional space model, which avoids the situation where photos of different circuits are gathered together and acceptance inspection becomes impossible.
In this embodiment, as shown in Fig. 4, the naming buttons in the naming dialog box include, in addition to the confirmation button, a self-naming button for default/automatic naming. The step of displaying the naming dialog box in step S220 may include steps S221 to S222.
Step S221: and identifying according to the selected position of the identification point on the multidimensional space model, and determining an inspection object corresponding to the identification point, wherein the inspection object comprises a ground wire and/or a phase wire.
By identifying the position of the selected identification point on the multi-dimensional space model, the inspection object can be determined. The inspection object represents the structure or component of the target tower under acceptance; for example, it may be a ground wire or a phase wire, and the phase sequence of the phase wire includes the upper phase, the middle phase and the lower phase.
Step S222: and adding the routing inspection object into the self-naming button. In fig. 4, "a phase", "B phase", and "C phase" are used to indicate an upper phase, a middle phase, and a lower phase, respectively. The names of the routing inspection objects in the self-naming button can be changed, namely, the ground line, the phase A, the phase B and the phase C can be changed into other names.
Corresponding to steps S221 and S222, the naming method may also be: when the self-naming button is triggered, naming the inspection photo corresponding to the selected identification point according to the inspection object shown in the self-naming button.
In one embodiment, when identification points of several inspection objects are selected, the inspection objects identified in the self-naming button can be directly added to the corresponding identification points according to the identification results for naming.
In another embodiment, the identified inspection objects in the self-naming button may first be added to the naming area so that the user can preview the naming content; after the confirmation button is triggered, naming is performed according to the content of the naming area.
By providing the self-naming button, the inspection object can be identified automatically; when the self-naming button is triggered, the identified inspection object is added directly to the name of the inspection photo, thus achieving default/automatic naming. If the user wants a custom name, only the name of the inspection object in the self-naming button or the content of the naming area needs to be changed. It should be noted that both default naming and custom naming can name at least one inspection photo and perform the classification-and-naming operation quickly; compared with naming and classifying the photos one by one, this saves time and improves efficiency.
In this embodiment, in order to identify the identification points, the first area in the display interface of Fig. 3 is divided into a plurality of sub-regions. In one implementation, the first area includes eight sub-regions, as shown in Fig. 5, which is a schematic distribution diagram of the sub-regions provided by an embodiment of the present invention.
Step S221 may be implemented as follows: identifying the sub-region in which the selected identification point is located, and determining the inspection object corresponding to the identification point according to the identified sub-region, wherein each sub-region corresponds to one inspection object.
Corresponding to the distribution of sub-regions shown in Fig. 5, the inspection objects corresponding to sub-regions 1 to 4 are the ground wire, upper phase, middle phase and lower phase of one circuit of tower X, and the inspection objects corresponding to sub-regions 5 to 8 are the ground wire, upper phase, middle phase and lower phase of the other circuit of tower X. The identification points corresponding to the inspection photos are displayed on the multi-dimensional space model according to the position information of the shooting points, so the distribution of the identification points on the model can be observed; if the distribution of the identification points matches the sub-region distribution shown in Fig. 5, it can be known that tower X is a strain tower carrying a vertically arranged double circuit. By further identifying the sub-regions over which the identification points are distributed, the inspection object corresponding to each identification point is determined; after the inspection object is identified, the identification points are named according to it, thereby classifying and naming the inspection photos.
If the identification points on the multi-dimensional space model are distributed only on the left or only on the right of Fig. 5, a single circuit is indicated; if there are identification points on both the left and the right, a double circuit is indicated. It should be noted that in other embodiments the first area may be divided into more or fewer sub-regions, and a person skilled in the art may choose a division that matches the structure of the actual tower, so that when the identification points are distributed on the multi-dimensional space model, the structure of the target tower can be determined from the positions of the identification points, and the inspection object, that is, the photographed part of the target tower, can then be determined.
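To make the sub-region idea concrete, the following sketch maps the screen position of a selected identification point in the first area to one of the eight sub-regions of Fig. 5 and then to an inspection object. The four-row, two-column layout and the object labels are assumptions based on the description above, not a prescribed implementation.

```python
# Illustrative mapping from the eight sub-regions of Fig. 5 to inspection
# objects: sub-regions 1-4 cover one circuit (ground wire, upper, middle,
# lower phase) and sub-regions 5-8 the other circuit of the same tower.
SUBREGION_OBJECTS = {
    1: ("circuit 1", "ground wire"),
    2: ("circuit 1", "upper phase"),
    3: ("circuit 1", "middle phase"),
    4: ("circuit 1", "lower phase"),
    5: ("circuit 2", "ground wire"),
    6: ("circuit 2", "upper phase"),
    7: ("circuit 2", "middle phase"),
    8: ("circuit 2", "lower phase"),
}

def subregion_of(x, y, width, height):
    """Assumed layout: two columns of four rows inside the first area.

    (x, y) is the identification point's position in the first area, with the
    origin at the top-left corner; returns the sub-region number 1..8.
    """
    col = 0 if x < width / 2 else 1          # left column = one circuit
    row = min(int(4 * y / height), 3)        # four rows: ground/upper/middle/lower
    return col * 4 + row + 1

def inspection_object(x, y, width, height):
    return SUBREGION_OBJECTS[subregion_of(x, y, width, height)]

print(inspection_object(120, 40, 800, 600))   # e.g. ('circuit 1', 'ground wire')
```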
By this method, the inspection object can be obtained by identifying the sub-region in which an identification point is located, and the inspection photos corresponding to the identification points are classified and named once the inspection object is determined. This avoids the situation where photos of different circuits are gathered together during acceptance inspection of a multi-circuit tower, and the inspection photos can be distinguished visually through the distribution of the identification points on the multi-dimensional space model, which is easy to recognize and implement.
In one example, the user only needs to select several identification points in sub-region 1 and then click the self-naming button in the displayed naming dialog box to complete the classification and naming of those identification points; here the self-naming button displays the word "ground wire". Alternatively, the user can enter "ground wire" in the displayed naming dialog box and click the confirmation button to complete the classification and naming of the identification points. In this way, the classification and naming of a large number of inspection photos can be completed in a short time, the shooting points of the target tower are visualized, and the efficiency of acceptance inspection is improved.
In this embodiment, in order to display the naming result, the method further includes step S240.
Step S240: and displaying a naming result corresponding to the inspection photo in a second area, wherein the second area is used for displaying an inspection photo catalog corresponding to the identification point in the first area in a list form.
The naming result is displayed in the second area of the display interface. Whether the classification and naming of all the inspection photos has been completed can be confirmed by checking the inspection photo directory in the second area.
In one embodiment, the inspection photos that have been classified and named and those that have not are arranged at different positions in the second area; for example, they may be stored in two columns, one column for the classified photos and the other for the unclassified photos.
In another embodiment, in the second area, the inspection photos that have undergone the classification-and-naming operation and those that have not are marked with different colors, so that it can be confirmed whether the classification and naming of all inspection photos has been completed.
In yet another embodiment, the current names of all inspection photos are displayed in the second area, while on the multi-dimensional space model the classified-and-named identification points and the unclassified ones are shown in different display forms, for example distinguished by different colors.
The above embodiments may be combined according to actual circumstances.
In this embodiment, in order to display the inspection photos in the process of classification and naming, the method further includes step S250.
Step S250: and responding to the query instruction of the identification point, and displaying the routing inspection photo corresponding to the identification point.
The inspection photo can be displayed directly on the multi-dimensional space model: after an identification point is selected, the content of the corresponding inspection photo is shown on the model, so that the details of certain particular inspection photos can be confirmed. When a certain part of the tower needs special attention, for example when a line fault has occurred at that part, these particular inspection photos can be confirmed and classified one by one.
The inspection photo can also be displayed at a fixed position in the display interface. Fig. 6 is a schematic diagram of the display position of the inspection photo: after an identification point is selected, the inspection photo corresponding to the identification point is displayed at the fixed position (marked "photo" in Fig. 6). It should be noted that the position of the inspection photo in the display interface can be changed; for example, the photo can be dragged to another position so that it does not block the operation interface.
The inspection photo may also be displayed in the second area, for example in the form of a thumbnail; when the thumbnail is triggered, an enlarged view of the inspection photo is displayed.
In this embodiment, in order to operate on a single photo, and corresponding to step S250, the method further includes step S260.
Step S260: and in response to the naming operation of any polling photo, naming the polling photo. In one embodiment, when a click operation on a mark point in a multidimensional space model is received, a patrol inspection photo corresponding to the mark point is displayed, and the patrol inspection photo is named at the same time, wherein the naming mode can be default naming and custom naming.
In this embodiment, in order to perform detailed acceptance on a single photo, after step S250 the method further includes: in response to a frame-selection operation in any inspection photo, generating a screenshot from the image in the frame-selected area, and displaying a text box after the screenshot is generated, wherein the text box receives a name entered by the user and the content of the text box can be used as the name of the screenshot. The screenshot can be used to further show the details in the inspection photo.
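As a rough illustration of the screenshot step, the sketch below crops a frame-selected region from an inspection photo and saves it under a user-supplied name. The use of the Pillow library, the pixel-box convention and the example paths are assumptions for illustration; the patent does not specify an implementation.

```python
from PIL import Image  # Pillow; an assumed choice of imaging library

def save_screenshot(photo_path, box, screenshot_name):
    """Crop the frame-selected region from an inspection photo.

    box is (left, upper, right, lower) in pixel coordinates, as would be
    produced by a hypothetical frame-selection gesture in the display interface.
    """
    with Image.open(photo_path) as img:
        img.crop(box).save(f"{screenshot_name}.jpg")

# Example (file name and box are illustrative only):
# save_screenshot("110kV two new II line-N26-ground wire.jpg",
#                 (200, 150, 640, 480), "N26-ground wire-clamp detail")
```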
In this embodiment, before step S210, the method further includes step S201 to step S202.
Step S201: and acquiring the position information of the inspection photos in the inspection photo set.
Step S201: a multi-dimensional spatial model is generated within the first region from the location information.
The position information may be derived from unmanned aerial vehicle (UAV) flight data. The UAV flight data include UAV image data and POS data (the specific content is known to those skilled in the art): the UAV image data are the images captured by the photographing device carried by the UAV, and the POS data record the three-dimensional coordinates (longitude, latitude, flight height) and flight attitude (heading angle, pitch angle, roll angle) of the UAV at the moment of shooting. Table 1 below is an example of POS data.
TABLE 1 POS data example
(The content of Table 1 is provided as an image in the original publication.)
The POS data are stored in a storage device. A multi-dimensional space model can be generated from the POS data and displayed in the first area of the display interface; the inspection objects can then be determined according to the positions, on the multi-dimensional space model, of the inspection photos corresponding to the POS data, and the photos are classified and named according to the inspection objects.
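For illustration, the following sketch reads POS records and converts each shooting point into a position relative to a tower, which could serve as the identification points placed on the model. Since Table 1 is reproduced only as an image, the column layout, field names and CSV format below are assumptions, not the patent's actual data format.

```python
import csv
import math

# Assumed column layout for a POS data file; one row per photo, no header row.
POS_FIELDS = ["photo", "longitude", "latitude", "height", "heading", "pitch", "roll"]

def load_pos_data(path):
    """Read POS records and return one dict per inspection photo."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = csv.DictReader(f, fieldnames=POS_FIELDS)
        return [{k: (r[k] if k == "photo" else float(r[k])) for k in POS_FIELDS}
                for r in rows]

def identification_points(records, tower_lon, tower_lat):
    """Express each shooting point relative to the tower (metres east/north, plus height)."""
    k = 111_000  # metres per degree of latitude, approximately
    points = []
    for r in records:
        east = (r["longitude"] - tower_lon) * k * math.cos(math.radians(tower_lat))
        north = (r["latitude"] - tower_lat) * k
        points.append({"photo": r["photo"], "east": east,
                       "north": north, "up": r["height"]})
    return points
```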
Before importing the POS data, those skilled in the art can select inspection items according to actual needs, establish an inspection task and select the inspection line.
In a complete example, a new fine inspection project can be created with a voltage level of 220KV and the line name "110KV two new II line"; the inspection photo set is imported into the project, matched and bound to it, and the inspection photos in the set are then further classified and named.
When the line "110KV two new II line" is selected, a map is displayed in the display interface with the line and a number of tower identifiers shown on it. When the target tower "two new II line N26" is selected, the multi-dimensional space model is displayed on the display interface. When an identification point on the multi-dimensional space model is selected, the default content of the naming area in the naming dialog box is "110KV two new II line N26", and the naming buttons show "ground wire", "phase A", "phase B" and "phase C". When a naming button is clicked, the name of the inspection photo corresponding to the identification point is changed to "110KV two new II line-two new II line N26-ground wire", "110KV two new II line-N26-phase A", and so on, thereby classifying and naming the inspection photos. When an identification point in the multi-dimensional space model is clicked, the inspection photo can be displayed and its details confirmed, for example by performing a screenshot operation on it to generate and name a new image. The names of all the inspection photos can be displayed in the second area of the display interface. All the selection operations in the above methods may be point selection or frame selection.
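As an illustration of the naming convention in this example, the following sketch composes a hierarchical name of the form "line-tower-inspection object" and renames the corresponding photo file accordingly. It is not part of the patent; the function names and example paths are hypothetical.

```python
from pathlib import Path

def build_photo_name(line_name, tower_id, inspection_object):
    """Compose a hierarchical name such as '110KV two new II line-N26-phase A'."""
    return f"{line_name}-{tower_id}-{inspection_object}"

def rename_inspection_photo(photo_path, new_name):
    """Rename the photo file on disk, keeping its original extension."""
    p = Path(photo_path)
    target = p.with_name(new_name + p.suffix)
    p.rename(target)
    return target

# Example (paths are illustrative only):
# rename_inspection_photo("DJI_0001.JPG",
#                         build_photo_name("110KV two new II line", "N26", "phase A"))
```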
By this method, the inspection requirements can be met even if the inspectors do not shoot in a prescribed order on site and do not classify and name the photos at the shooting site (naming photos in the field seriously affects the efficiency of outdoor inspection work: compared with simply shooting, finely inspecting a single tower while naming on site consumes several times the batteries and operating time, which is unfavourable to the effective conduct of actual inspection work), and the sorting of the inspection photos can still be completed in a short time. When the method is applied to the sorting of large numbers of images, the processing efficiency is significantly improved, inspection efficiency rises, great convenience is brought to fine acceptance inspection work, and the method therefore has good application value.
Third embodiment
Please refer to Fig. 7, which is a schematic diagram of the functional modules of the image classification apparatus 110 shown in Fig. 1 according to an embodiment of the present invention. The image classification apparatus 110 includes a first display module 111, a second display module 112, a naming module 113, an obtaining module 114 and a generating module 115.
The first display module 111 is configured to display a multi-dimensional space model in a first area, wherein the multi-dimensional space model carries at least one identification point and each identification point is associated with an inspection photo in the inspection photo set.
The second display module 112 is configured to display a naming dialog box in response to a selection operation in the first area, wherein a naming area and naming buttons are provided in the naming dialog box, and the naming buttons include a confirmation button.
The naming module 113 is configured to name the inspection photo corresponding to the selected identification point according to the content of the naming area when the confirmation button is triggered.
In this embodiment, the second display module 112 further includes an identification module configured to identify the position of the selected identification point on the multi-dimensional space model and determine the inspection object corresponding to the identification point, wherein the inspection object includes a ground wire and/or a phase wire. The second display module 112 is further configured to display a self-naming button in the naming dialog box and to display the identified inspection object in the self-naming button. The naming module 113 is further configured to name the inspection photo corresponding to the selected identification point according to the inspection object in the self-naming button when the self-naming button is triggered.
The identification module is further configured to identify the sub-region in which the selected identification point is located and determine the inspection object corresponding to the identification point according to the identified sub-region, wherein each sub-region corresponds to one inspection object.
In this embodiment, the first display module 111 is further configured to display the naming result corresponding to the inspection photos in a second area, wherein the second area is used to display, in the form of a list, the directory of inspection photos corresponding to the identification points in the first area.
The second display module 112 is further configured to display the inspection photo corresponding to an identification point in response to a query instruction for that identification point, and the naming module 113 is further configured to name any inspection photo in response to a naming operation on that photo.
The obtaining module 114 is configured to obtain position information of the inspection photos in the inspection photo set, and the generating module 115 is configured to generate a multi-dimensional space model in the first area according to the position information.
For further details of the image classification apparatus 110 in this embodiment, reference may be made to the related description of the method in the foregoing embodiment, which is not repeated herein.
With the image classification apparatus 110, the image classification method of the above embodiment can be executed. The method has good application value for processing large numbers of inspection photos: it saves the time spent sorting the photos, and the classification and naming of the inspection photos can be completed simply by operating the identification points on the multi-dimensional space model.
In conclusion, the image classification method, the image classification device and the electronic terminal save the time needed to sort inspection photos and have a high application value for fine acceptance inspection work: the sorting and classification of the photos can be finished in a short time, the acquisition order, angle and number of the inspection photos are not restricted, and inspectors can shoot according to the actual conditions on site.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A method of classifying an image, the method comprising:
displaying a multi-dimensional space model in a first area, wherein the multi-dimensional space model is provided with at least one identification point, each identification point is associated with an inspection photo in an inspection photo set, and a plurality of identification points on the same multi-dimensional space model represent a plurality of inspection photos corresponding to the same target tower;
in response to a selection operation in the first area, identifying the position of the selected identification point on the multi-dimensional space model, determining an inspection object corresponding to the identification point, and adding the inspection object to a self-naming button of a naming dialog box for display, wherein the inspection object comprises a ground wire and/or a phase wire, the phase sequence of the phase wire comprises an upper phase, a middle phase and a lower phase, a naming area and naming buttons are provided in the naming dialog box, and the naming buttons comprise a confirmation button and the self-naming button;
when the confirmation button is triggered, naming the inspection photo corresponding to the selected identification point according to the content of the naming area; and
when the self-naming button is triggered, naming the inspection photo corresponding to the selected identification point according to the inspection object in the self-naming button.
2. The image classification method according to claim 1, wherein the first area comprises a plurality of sub-regions, and the identifying according to the position of the selected identification point on the multi-dimensional space model comprises:
identifying the sub-region in which the selected identification point is located; and
the determining of the inspection object corresponding to the identification point comprises:
determining the inspection object corresponding to the identification point according to the identified sub-region, wherein each sub-region corresponds to one inspection object.
3. The image classification method of claim 1, further comprising:
displaying a naming result corresponding to the inspection photos in a second area, wherein the second area is used to display, in the form of a list, a directory of the inspection photos corresponding to the identification points in the first area.
4. The image classification method of claim 1, further comprising:
in response to a query instruction for an identification point, displaying the inspection photo corresponding to the identification point.
5. The image classification method of claim 4, further comprising:
in response to a naming operation on any inspection photo, naming that inspection photo.
6. The image classification method of any one of claims 1 to 5, wherein, before the step of displaying the multi-dimensional space model in the first area, the method further comprises:
acquiring position information of the inspection photos in the inspection photo set; and
generating the multi-dimensional space model in the first area according to the position information.
7. An image classification apparatus, characterized in that the apparatus comprises:
a first display module, configured to display a multi-dimensional space model in a first area, wherein at least one identification point is provided on the multi-dimensional space model, each identification point is associated with an inspection photo in an inspection photo set, and a plurality of identification points on the same multi-dimensional space model represent a plurality of inspection photos corresponding to the same target tower;
a second display module, configured to display a naming dialog box in response to a selection operation in the first area, wherein a naming area and naming buttons are provided in the naming dialog box, and the naming buttons comprise a confirmation button; and
a naming module, configured to name the inspection photo corresponding to the selected identification point according to the content of the naming area when the confirmation button is triggered;
wherein the second display module comprises an identification module, the identification module being configured to identify the position of the selected identification point on the multi-dimensional space model and determine an inspection object corresponding to the identification point, the inspection object comprising a ground wire and/or a phase wire, and the phase sequence of the phase wire comprising an upper phase, a middle phase and a lower phase;
the second display module is further configured to display a self-naming button in the naming dialog box, the self-naming button containing the identified inspection object; and
the naming module is further configured to name the inspection photo corresponding to the selected identification point according to the inspection object in the self-naming button when the self-naming button is triggered.
8. The image classification apparatus of claim 7, further comprising:
an obtaining module, configured to obtain position information of the inspection photos in the inspection photo set; and
a generating module, configured to generate the multi-dimensional space model in the first area according to the position information.
9. An electronic terminal, comprising:
a memory;
a processor;
the memory being configured to store a program, and the processor being configured to execute the program stored in the memory to perform the method of any one of claims 1 to 6.
CN201811162759.4A 2018-09-30 2018-09-30 Image classification method and device and electronic terminal Active CN109298825B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011080269.7A CN112214152B (en) 2018-09-30 2018-09-30 Image classification method and device and electronic terminal
CN201811162759.4A CN109298825B (en) 2018-09-30 2018-09-30 Image classification method and device and electronic terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811162759.4A CN109298825B (en) 2018-09-30 2018-09-30 Image classification method and device and electronic terminal

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202011080269.7A Division CN112214152B (en) 2018-09-30 2018-09-30 Image classification method and device and electronic terminal

Publications (2)

Publication Number Publication Date
CN109298825A CN109298825A (en) 2019-02-01
CN109298825B true CN109298825B (en) 2020-11-06

Family

ID=65161576

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811162759.4A Active CN109298825B (en) 2018-09-30 2018-09-30 Image classification method and device and electronic terminal
CN202011080269.7A Active CN112214152B (en) 2018-09-30 2018-09-30 Image classification method and device and electronic terminal

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011080269.7A Active CN112214152B (en) 2018-09-30 2018-09-30 Image classification method and device and electronic terminal

Country Status (1)

Country Link
CN (2) CN109298825B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960736A (en) * 2019-02-19 2019-07-02 广州中科云图智能科技有限公司 Defect analysis method, system, storage medium and equipment based on unmanned plane
CN111018236B (en) * 2019-11-25 2022-07-01 成都碧水水务建设工程有限公司 Method and system for generating sewage treatment inspection scheme
CN111309945B (en) * 2020-01-19 2022-03-01 国网山东省电力公司青岛供电公司 Method and system for accurately classifying inspection pictures of unmanned aerial vehicle
CN112114731A (en) * 2020-09-09 2020-12-22 广东电网有限责任公司 Transformer substation inspection equipment
CN112883219B (en) * 2021-01-12 2023-06-13 吴发献 Unmanned aerial vehicle inspection photo naming method based on spatial position
CN113867406B (en) * 2021-11-10 2024-07-12 广东电网能源发展有限公司 Unmanned aerial vehicle-based line inspection method, unmanned aerial vehicle-based line inspection system, intelligent equipment and storage medium
CN114237466B (en) * 2021-12-15 2023-06-30 文思海辉智科科技有限公司 Inspection point configuration method and device
CN115713555B (en) * 2022-11-09 2023-12-08 中国南方电网有限责任公司超高压输电公司昆明局 Image acquisition equipment installation position determining method and device and computer equipment

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4711577B2 (en) * 2001-09-28 2011-06-29 株式会社パスコ Map information display program
US7787672B2 (en) * 2004-11-04 2010-08-31 Dr Systems, Inc. Systems and methods for matching, naming, and displaying medical images
CN102193965A (en) * 2010-03-03 2011-09-21 宏达国际电子股份有限公司 Method, device and system for browsing spot information
US10845982B2 (en) * 2014-04-28 2020-11-24 Facebook, Inc. Providing intelligent transcriptions of sound messages in a messaging application
US11061540B2 (en) * 2015-01-21 2021-07-13 Logmein, Inc. Remote support service with smart whiteboard
CN106776916A (en) * 2016-12-01 2017-05-31 广东容祺智能科技有限公司 A kind of line walking picture names filing and report output system automatically
CN107168370A (en) * 2017-06-16 2017-09-15 广东电网有限责任公司佛山供电局 The fine intelligent inspection system of transmission line of electricity multi-rotor unmanned aerial vehicle and its method
CN207409918U (en) * 2017-06-30 2018-05-25 国网上海市电力公司 A kind of intelligent patrol detection device of transmission line of electricity
CN107316353B (en) * 2017-07-03 2019-09-24 国网冀北电力有限公司承德供电公司 A kind of unmanned plane inspection approaches to IM, system and server
CN108038673A (en) * 2017-12-29 2018-05-15 国网上海市电力公司 A kind of archive management system of electric power overhaul engineering project
CN108133522B (en) * 2017-12-29 2024-05-07 北京神州泰岳软件股份有限公司 Pipe gallery inspection method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105141889A (en) * 2015-07-28 2015-12-09 国家电网公司 Power transmission line intelligent patrol system based on image vision
CN105513155A (en) * 2015-12-01 2016-04-20 中国联合网络通信集团有限公司 Inspection picture classifying and naming method and terminal equipment
CN107145580A (en) * 2017-05-09 2017-09-08 广东电网有限责任公司揭阳供电局 A kind of method of destination object image management, apparatus and system
CN108307112A (en) * 2018-01-25 2018-07-20 国家电网公司 Electricity transmitting and converting construction digital photograph harvester
CN108346176A (en) * 2018-02-08 2018-07-31 河南送变电建设有限公司 The method of transmission line of electricity three-dimensional modeling

Also Published As

Publication number Publication date
CN112214152A (en) 2021-01-12
CN112214152B (en) 2022-01-18
CN109298825A (en) 2019-02-01

Similar Documents

Publication Publication Date Title
CN109298825B (en) Image classification method and device and electronic terminal
US9535930B2 (en) System and method for using an image to provide search results
CN108230113A (en) User's portrait generation method, device, equipment and readable storage medium storing program for executing
CN103942218A (en) Method and device for generating and updating special subject pages
JP2019520662A (en) Content-based search and retrieval of trademark images
CN103955543A (en) Multimode-based clothing image retrieval method
WO2012164685A1 (en) Information providing device, information providing method, information providing processing program, recording medium recording information providing processing program, and information providing system
US20170287041A1 (en) Information processing apparatus, information processing method, and information processing program
CN103345692B (en) Data handling system, server unit and data processing method
CN110059641B (en) Depth bird recognition algorithm based on multiple preset points
DE112019001226T5 (en) Image search apparatus, image search method, electronic device and control method therefor
CN113901647A (en) Part process rule compiling method and device, storage medium and electronic equipment
US20150178314A1 (en) System and method for using an image to provide search results
WO2019230499A1 (en) Image retrieval apparatus image retrieval method, product catalog generation system, and recording medium
JP2013182524A (en) Image processing apparatus and image processing method
CN116682130A (en) Method, device and equipment for extracting icon information and readable storage medium
JP6314071B2 (en) Information processing apparatus, information processing method, and program
CN110308848B (en) Label interaction method and device and computer storage medium
CN117054846A (en) Visual test method, system and device for chip and storage medium
CN111047731A (en) AR technology-based telecommunication room inspection method and system
CN108170838B (en) Topic evolution visualization display method, application server and computer readable storage medium
CN104850608A (en) Method for searching keywords on information exhibiting page
CN113920424A (en) Method and device for extracting visual objects of power transformation inspection robot
JP5467829B2 (en) Aerial photo search system
CN113873080B (en) Multimedia file acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant