CN113361498B - Remote judgment and repair method and system for smart city front-end fault equipment - Google Patents

Remote judgment and repair method and system for smart city front-end fault equipment

Info

Publication number
CN113361498B
Authority
CN
China
Prior art keywords: picture, server, end equipment, appointed, preset
Prior art date
Legal status
Active
Application number
CN202110905756.0A
Other languages
Chinese (zh)
Other versions
CN113361498A (en)
Inventor
王霞
宋凯
丁军祥
陈志华
Current Assignee
Jingwang Technology Co ltd
Original Assignee
Jingwang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jingwang Technology Co ltd filed Critical Jingwang Technology Co ltd
Priority to CN202110905756.0A
Publication of CN113361498A
Application granted
Publication of CN113361498B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/951Indexing; Web crawling techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services

Abstract

The application discloses a remote judgment and repair method and system for smart city front-end fault equipment, comprising the following steps: acquiring picture data and inputting it into a front-end device prediction model for processing to obtain a prediction result; acquiring the acquisition position and acquisition time and determining the designated front-end device; acquiring the designated operation content; if the picture data does not match the designated operation content, the server sends designated AR data to an AR helmet terminal; the AR helmet terminal collects a first image in front of its wearer; if the first image is the external touch screen of the designated front-end device, the AR helmet terminal displays an AR interface; a designated device terminal senses a set of sensing signals; and if the touch operation corresponding to the sensing signal set constitutes a complete repair process, the housing is opened so that the repair operation can be carried out, thereby realizing remote judgment and repair of front-end fault equipment.

Description

Remote judgment and repair method and system for smart city front-end fault equipment
Technical Field
The application relates to the field of computers, in particular to a method and a system for remotely judging and repairing front-end fault equipment of a smart city.
Background
In the traditional scheme, whether a smart city front-end device has failed is generally judged by checking whether communication between the server and the device terminal controlling the front-end device is smooth. In some fault scenarios, however, communication between the server and the device terminal is smooth, yet the operation actually performed by the front-end device is inconsistent with the operation it should perform, so the conventional judgment scheme cannot identify the faulty device. Moreover, once a fault is confirmed, repairing the front-end device depends on a worker with a certain level of maintenance skill, so the timeliness of the repair is difficult to guarantee.
Disclosure of Invention
The application provides a remote judgment and repair method for front-end fault equipment of a smart city, which comprises the following steps:
s1, the server acquires picture data from at least one city network by adopting a preset data crawling technology, inputs the picture data into a preset front-end equipment prediction model for processing to obtain a prediction result output by the front-end equipment prediction model, and judges whether the prediction result is a front-end equipment picture or not; the prediction result comprises a front-end equipment picture or a non-front-end equipment picture; the front-end equipment prediction model is formed by training based on a preset deep convolutional neural network model in a supervised learning mode;
s2, if the prediction result is a front-end equipment picture, the server acquires the acquisition position and the acquisition time of the picture data, and acquires the appointed front-end equipment corresponding to the acquisition position according to a first corresponding table of the position and the front-end equipment; acquiring appointed operation content corresponding to the acquisition time according to a second corresponding table of time and appointed front-end equipment operation content; judging whether the picture data is matched with the specified operation content;
s3, if the picture data are not matched with the designated operation content, the server extracts designated AR data corresponding to the designated front-end equipment from a preset database and sends the designated AR data to a preset AR helmet terminal;
s4, the AR helmet terminal collects a first image in front of a wearer of the AR helmet terminal by adopting a preset image sensor, and judges whether the first image is an external touch screen of the appointed front-end equipment; the external touch screen is preset with a touch sensor, but does not display images;
s5, if the first image is the external touch screen of the appointed front-end equipment, displaying an AR interface by the AR helmet terminal so that the AR interface is superposed on the external touch screen; the AR interface is formed by all modules of the appointed front-end equipment and the mutual connection relation among all the modules;
s6, sensing a sensing signal set of the touch operation of the wearer by the appointed equipment terminal through the external touch screen, and judging whether the touch operation corresponding to the sensing signal set is a complete repairing process; when the wearer carries out simulation repair operation in the AR space, the wearer should touch the external touch screen firstly; the appointed equipment terminal is used for controlling the appointed front-end equipment;
and S7, if the touch operation corresponding to the sensing signal set is a complete repairing process, the appointed equipment terminal controls the appointed front-end equipment to open the shell so that the wearer can perform repairing operation.
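As a minimal illustration, the S1-S7 control flow above can be sketched as the following Python skeleton. Every helper callable and table passed in (`predict`, `matches`, `send_ar`, `open_shell`, and so on) is a hypothetical stand-in introduced for this sketch, not an interface defined by the patent.

```python
# Hypothetical sketch of the S1-S7 flow; all helper names are invented.

def remote_judge_and_repair(picture, predict, pos_table, time_table,
                            matches, ar_db, send_ar,
                            is_complete_repair, open_shell):
    # S1: classify the crawled picture with the prediction model
    if predict(picture["data"]) != "front_end":
        return "not_front_end"
    # S2: resolve the designated device and its expected operation content
    device = pos_table[picture["position"]]
    expected_op = time_table[picture["time"]]
    if matches(picture["data"], expected_op):
        return "normal"
    # S3: push the device-specific AR data to the AR helmet terminal
    send_ar(ar_db[device])
    # S6-S7: open the housing only after a complete simulated repair
    # has been sensed on the external touch screen
    if is_complete_repair():
        open_shell(device)
        return "repair_in_progress"
    return "repair_blocked"
```

The S4-S5 helmet-side steps are deliberately collapsed into the `is_complete_repair` stub here, since they run on a different terminal.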
Further, before step S1, the step of acquiring, by the server, picture data from at least one urban network by using a preset data crawling technology, inputting the picture data into a preset front-end device prediction model for processing, so as to obtain a prediction result output by the front-end device prediction model, and determining whether the prediction result is a front-end device picture, includes:
s001, a server acquires a preset number of collected sample pictures, and manually marks the sample pictures to mark front-end equipment in the sample pictures so as to obtain marked pictures;
s002, dividing the marked pictures into pictures for training and pictures for verification by the server according to a preset proportion;
s003, the server calls a preset deep convolutional neural network model, and inputs the picture for training into the deep convolutional neural network model for training to obtain a primary model;
s004, the server adopts the picture for verification to verify the primary model so as to obtain a verification result;
s005, the server judges whether the verification result is that the verification is passed;
and S006, if the verification result is that the verification is passed, the server marks the preliminary model as a front-end equipment prediction model.
Further, the step S2 of determining whether the picture data matches the specified operation content includes:
s201, a server acquires a default picture which is shot in advance when the appointed front-end equipment does not execute operation;
s202, the server modifies the default picture according to the designated operation content to obtain a standard operation picture;
s203, the server carries out region interception processing on the picture data to obtain a region picture; wherein the specified front-end device is located in the region picture;
s204, the server carries out similarity calculation processing on the standard operation picture and the region picture according to a preset similarity calculation method to obtain a similarity value;
s205, the server judges whether the similarity value is larger than a preset similarity threshold value;
and S206, if the similarity value is larger than a preset similarity threshold, the server judges that the picture data is matched with the specified operation content.
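Steps S201-S206 leave the "preset similarity calculation method" open. One common choice, shown here purely as an assumed example, is cosine similarity over grayscale pixel values; the threshold value is likewise an illustrative assumption.

```python
import math

def cosine_similarity(pic_a, pic_b):
    """S204: similarity value between two equal-sized grayscale pixel lists."""
    dot = sum(a * b for a, b in zip(pic_a, pic_b))
    norm = (math.sqrt(sum(a * a for a in pic_a))
            * math.sqrt(sum(b * b for b in pic_b)))
    return dot / norm if norm else 0.0

def picture_matches_operation(standard_pic, region_pic, threshold=0.9):
    """S205-S206: match when the similarity exceeds the preset threshold."""
    return cosine_similarity(standard_pic, region_pic) > threshold
```

In practice the standard operation picture and the region picture would first be resized to the same dimensions and flattened.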
Further, step S5, in which, if the first image is the external touch screen of the specified front-end device, the AR helmet terminal displays an AR interface so that the AR interface is superposed on the external touch screen, includes:
s501, if the first image is the external touch screen of the appointed front-end device, displaying an AR interface with operation guide by the AR helmet terminal, so that the AR interface is superposed on the external touch screen, and indicating the wearer to perform a simulation repairing process according to the operation guide.
Further, after step S7 (in which, if the touch operation corresponding to the sensing signal set is a complete repair process, the housing is opened), the method includes:
s71, after the wearer finishes the repair operation, the owner of a preset inspection terminal carries out manual inspection of the designated front-end device, and the manual inspection result is sent to the server through the inspection terminal; the inspection terminal is a mobile terminal, and the manual inspection result at least includes all modules of the repaired designated device terminal and the interconnection relationships among them;
s72, the server acquires the sensing signal set sent by the appointed equipment terminal;
s73, judging whether the sensing signal set is matched with the manual detection processing result by the server;
and S74, if the sensing signal set is matched with the manual detection processing result, the server judges that the designated front-end equipment is successfully repaired.
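The post-repair verification of steps S71-S74 amounts to comparing two module-and-connection descriptions. A minimal sketch, assuming (purely for illustration) that both the sensing signal set and the manual inspection result are given as dictionaries of module names and module pairs:

```python
def repair_verified(sensing_signals, inspection_report):
    """S73-S74: the repair is judged successful when the modules and the
    module-to-module connections sensed during the simulated repair agree
    with the manual inspection result."""
    sensed_modules = set(sensing_signals["modules"])
    sensed_links = {frozenset(link) for link in sensing_signals["connections"]}
    report_modules = set(inspection_report["modules"])
    report_links = {frozenset(link) for link in inspection_report["connections"]}
    return sensed_modules == report_modules and sensed_links == report_links
```

Using `frozenset` for each connection makes the comparison insensitive to the order in which a link's two endpoints are listed.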
The application further discloses a remote judgment and repair system for smart city front-end fault equipment, which includes:
the prediction result acquisition unit is used for indicating the server to adopt a preset data crawling technology to acquire picture data from at least one city network, inputting the picture data into a preset front-end equipment prediction model for processing to obtain a prediction result output by the front-end equipment prediction model, and judging whether the prediction result is a front-end equipment picture or not; the prediction result comprises a front-end equipment picture or a non-front-end equipment picture; the front-end equipment prediction model is formed by training based on a preset deep convolutional neural network model in a supervised learning mode;
the appointed front-end equipment obtaining unit is used for indicating that if the prediction result is a front-end equipment picture, the server obtains the acquisition position and the acquisition time of the picture data, and obtains appointed front-end equipment corresponding to the acquisition position according to a first correspondence table of the position and the front-end equipment; acquiring appointed operation content corresponding to the acquisition time according to a second corresponding table of time and appointed front-end equipment operation content; judging whether the picture data is matched with the specified operation content;
a designated AR data sending unit, configured to instruct, if the picture data does not match the designated operation content, the server to extract designated AR data corresponding to the designated front-end device from a preset database, and send the designated AR data to a preset AR helmet terminal;
the first image acquisition unit is used for indicating the AR helmet terminal to adopt a preset image sensor, acquiring a first image in front of a wearer of the AR helmet terminal, and judging whether the first image is an external touch screen of the appointed front-end equipment; the external touch screen is preset with a touch sensor, but does not display images;
the AR interface display unit is used for indicating that if the first image is the external touch screen of the appointed front-end device, the AR helmet terminal displays an AR interface so that the AR interface is superposed on the external touch screen; the AR interface is formed by all modules of the appointed front-end equipment and the mutual connection relation among all the modules;
the sensing signal set sensing unit is used for indicating a specified device terminal to sense a sensing signal set of the touch operation of the wearer through the external touch screen and judging whether the touch operation corresponding to the sensing signal set is a complete repairing process; when the wearer carries out simulation repair operation in the AR space, the wearer should touch the external touch screen firstly; the appointed equipment terminal is used for controlling the appointed front-end equipment;
and the repairing operation unit is used for indicating that if the touch operation corresponding to the sensing signal set is a complete repairing process, the appointed equipment terminal controls the appointed front-end equipment to open the shell so that the wearer can perform repairing operation.
According to the method and system for remotely judging and repairing smart city front-end fault equipment, picture data is obtained and input into a preset front-end device prediction model for processing, so as to obtain the prediction result output by the model; the acquisition position and acquisition time of the picture data are obtained and the designated front-end device is determined; the designated operation content is acquired; if the picture data does not match the designated operation content, the server sends the designated AR data to a preset AR helmet terminal; the AR helmet terminal uses a preset image sensor to collect a first image in front of its wearer; if the first image is the external touch screen of the designated front-end device, the AR helmet terminal displays an AR interface; the designated device terminal senses, through the external touch screen, a set of sensing signals from the wearer's touch operations; and if the touch operation corresponding to the sensing signal set constitutes a complete repair process, the housing is opened for the repair operation, thereby realizing remote judgment and repair of front-end fault equipment.
The remote fault judgment does not depend on the inherent data transmission between the server and the device terminal, which in effect adds a new fault judgment mode and improves the applicability of remote fault judgment. A special repair assistance scheme is also adopted (the AR display scheme used is a split scheme, different from the traditional AR display scheme involving only a single AR terminal), so that the corresponding repair operation can be carried out without requiring highly skilled repair workers. A simulated repair is required before the formal repair operation, and the sensing signal set reflecting the simulated repair process is stored in the device terminal for subsequent verification (which also helps prevent malicious damage to the front-end device).
Notably, during the remote judgment and repair of smart city front-end fault equipment, no related data transmission takes place between the server and the device terminal. This is a characteristic of the application: since the fault cause may include a faulty communication channel, unreliable communication is thereby prevented from corrupting the judgment and repair. This is also why the server performs the assisted repair by means of the AR helmet terminal rather than through the device terminal.
Drawings
Fig. 1 is a schematic flowchart of a smart city front-end failure device remote determination and repair method according to an embodiment of the present application;
fig. 2 is a schematic block diagram of a smart city front-end failure device remote judgment and repair system according to an embodiment of the present application;
fig. 3 is a schematic diagram of an AR helmet used in the smart city front-end failure device remote determination and repair method according to an embodiment of the present application.
the implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Example 1:
referring to fig. 1, the present embodiment provides a method for remotely determining and repairing a smart city front-end failure device, which includes the following steps:
s1, the server acquires picture data from at least one city network by adopting a preset data crawling technology, inputs the picture data into a preset front-end equipment prediction model for processing to obtain a prediction result output by the front-end equipment prediction model, and judges whether the prediction result is a front-end equipment picture or not; the prediction result comprises a front-end equipment picture or a non-front-end equipment picture; the front-end equipment prediction model is formed by training based on a preset deep convolutional neural network model in a supervised learning mode;
s2, if the prediction result is a front-end equipment picture, the server acquires the acquisition position and the acquisition time of the picture data, and acquires the appointed front-end equipment corresponding to the acquisition position according to a first corresponding table of the position and the front-end equipment; acquiring appointed operation content corresponding to the acquisition time according to a second corresponding table of time and appointed front-end equipment operation content; judging whether the picture data is matched with the specified operation content;
s3, if the picture data are not matched with the designated operation content, the server extracts designated AR data corresponding to the designated front-end equipment from a preset database and sends the designated AR data to a preset AR helmet terminal;
s4, the AR helmet terminal collects a first image in front of a wearer of the AR helmet terminal by adopting a preset image sensor, and judges whether the first image is an external touch screen of the appointed front-end equipment; the external touch screen is preset with a touch sensor, but does not display images;
s5, if the first image is the external touch screen of the appointed front-end equipment, displaying an AR interface by the AR helmet terminal so that the AR interface is superposed on the external touch screen; the AR interface is formed by all modules of the appointed front-end equipment and the mutual connection relation among all the modules;
s6, sensing a sensing signal set of the touch operation of the wearer by the appointed equipment terminal through the external touch screen, and judging whether the touch operation corresponding to the sensing signal set is a complete repairing process; when the wearer carries out simulation repair operation in the AR space, the wearer should touch the external touch screen firstly; the appointed equipment terminal is used for controlling the appointed front-end equipment;
and S7, if the touch operation corresponding to the sensing signal set is a complete repairing process, the appointed equipment terminal controls the appointed front-end equipment to open the shell so that the wearer can perform repairing operation.
Steps S1-S3 are performed by the server, i.e., the server is their executing agent. In the present application, different steps may be executed by different agents; the agent of each step is specified in the step itself.
The method comprises the steps that a server acquires picture data from at least one urban network by adopting a preset data crawling technology, inputs the picture data into a preset front-end equipment prediction model for processing to obtain a prediction result output by the front-end equipment prediction model, and judges whether the prediction result is a front-end equipment picture or not; the prediction result comprises a front-end equipment picture or a non-front-end equipment picture; the front-end equipment prediction model is formed by training based on a preset deep convolutional neural network model in a supervised learning mode; if the prediction result is a front-end equipment picture, the server acquires the acquisition position and the acquisition time of the picture data, and acquires the appointed front-end equipment corresponding to the acquisition position according to a first correspondence table of the position and the front-end equipment; acquiring appointed operation content corresponding to the acquisition time according to a second corresponding table of time and appointed front-end equipment operation content; judging whether the picture data is matched with the specified operation content; if the picture data are not matched with the specified operation contents, the server extracts specified AR data corresponding to the specified front-end equipment from a preset database and sends the specified AR data to a preset AR helmet terminal.
The AR helmet of the present application may be in any feasible form, such as shown in fig. 3, which includes an AR image display module (not shown, which may be presented based on optical waveguides, etc.), an image sensor, and the like.
The data crawling technique may be any feasible technique, such as a big-data web crawler, and is not described in detail here. The city network refers to an information network related to the smart city, including but not limited to the mobile internet, the internet of things, the internet of vehicles, and wired information transmission networks built in the city. That the server acquires picture data from at least one city network using a preset data crawling technique actually means that the server obtains pictures captured by the image sensors of non-front-end devices. The picture data obtained in this way carries its shooting location and shooting time, which are used in subsequent steps. The front-end device prediction model is then used to judge whether the picture data includes some front-end device, so as to determine whether the picture data is worth using. The picture data may be, for example, a scene photo taken by a mobile terminal whose field of view happens to include a front-end device (the front-end device being, for example, a street lamp or a traffic light); it may equally be a picture taken by a vehicle terminal through a preset camera.
If the prediction result is a front-end device picture, the picture data is related to a front-end device, and the running state of that device can be determined with the help of the picture data. First, the acquisition position and acquisition time of the picture data are obtained so as to identify the specific front-end device in the picture: the designated front-end device corresponding to the acquisition position is looked up in the first correspondence table of positions and front-end devices (the designated front-end device may be represented by a fixed number; in a smart city, for example, a fixed number is assigned to each front-end device in advance). Second, the operation content the designated front-end device should be performing at the acquisition time is determined, i.e., the designated operation content corresponding to the acquisition time is obtained. Finally, whether the picture data matches the designated operation content is judged, so as to assess the running state of the designated front-end device.
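The two table lookups in step S2 can be illustrated with plain dictionaries. All device numbers, coordinates, and schedule entries below are invented for the sketch; the patent only specifies that such first and second correspondence tables exist.

```python
# Hypothetical first and second correspondence tables for step S2.

FIRST_TABLE = {                  # acquisition position -> fixed device number
    (30.52, 114.31): "FE-0412",  # e.g. a street lamp
    (30.60, 114.28): "FE-0977",  # e.g. a traffic light
}

SECOND_TABLE = {                 # (device, hour) -> designated operation content
    ("FE-0412", 20): "lamp_on",
    ("FE-0412", 8): "lamp_off",
}

def lookup_designated_operation(position, hour):
    """Resolve the designated device (first table) and the operation it
    should be performing at the given hour (second table)."""
    device = FIRST_TABLE[position]
    return device, SECOND_TABLE[(device, hour)]
```

A real deployment would key the second table on finer-grained time ranges rather than whole hours; the hour key here is only a simplification.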
If the picture data does not match the designated operation content, the designated front-end device is judged to be faulty; whether a front-end device has failed can thus be judged remotely. It should be noted that this judgment does not rely on the data connection between the server and the device terminal corresponding to the front-end device, which makes it more timely and reduces short-term load on the server. In addition, if the picture data does match the designated operation content, the server only provisionally judges, on the basis of the picture data alone, that the designated front-end device is normal; its running state can still be measured with a traditional remote judgment scheme. Since that scheme is existing technology and not the focus of the application, it is not described further.
To facilitate fault repair, the server extracts the designated AR data corresponding to the designated front-end device from a preset database and sends it to a preset AR helmet terminal. This enables the AR helmet terminal to display AR images during the repair process (more precisely, at least during the parts of the process that require accuracy), so as to assist the repair. The AR helmet terminal may also be an AR glasses terminal; it is a wearable terminal capable of displaying AR images, arranged correspondingly in the AR helmet. The AR helmet terminal may be held in three possible modes: first, preset in a fixed place, for example near the designated front-end device; second, owned by a citizen, since AR devices are becoming lighter and more portable and citizens may wear them when going out; third, held by maintenance personnel. The front-end device is repaired by the wearer of the AR helmet, who differs across these three modes. When the wearer is a citizen, the method is suitable for simple repairs: in this case the front-end device is composed of several highly integrated modules, such as an energy supply module, a display module, a communication module, an instruction execution module, and the corresponding connecting lines, which can be replaced in a plug-in manner; when a module fails, it is simply swapped for a new one, a repair an ordinary citizen can perform with the assistance of the AR helmet. When the wearer is maintenance personnel, even those without rich experience can easily carry out the repair task.
Further, before step S1, in which the server acquires picture data from at least one urban network by using a preset data crawling technology, inputs the picture data into a preset front-end device prediction model for processing to obtain a prediction result output by the front-end device prediction model, and determines whether the prediction result is a front-end device picture, the method includes:
s001, a server acquires a preset number of collected sample pictures, and manually marks the sample pictures to mark front-end equipment in the sample pictures so as to obtain marked pictures;
s002, dividing the marked pictures into pictures for training and pictures for verification by the server according to a preset proportion;
s003, the server calls a preset deep convolutional neural network model, and inputs the pictures for training into the deep convolutional neural network model for training to obtain a preliminary model;
s004, the server verifies the preliminary model by using the pictures for verification to obtain a verification result;
s005, the server judges whether the verification result is that the verification is passed;
and S006, if the verification result is that the verification is passed, the server marks the preliminary model as a front-end equipment prediction model.
The deep convolutional neural network model is a machine learning model suitable for classifying or predicting images, and comprises an input layer, convolutional layers, pooling layers, fully-connected layers, an output layer and the like. When a sample picture is manually marked, the main portion of the sample picture is generally not the front-end device itself, so the front-end device is generally marked at the edge portion of the sample picture. In addition, not all sample pictures include a front-end device; if a sample picture does not include one, no marking is needed, and the training mode adopted in this case is actually a fully supervised learning mode. Further, the step of obtaining a pre-collected specified number of sample pictures and manually marking them to mark the front-end device, so as to obtain marked pictures, can be further detailed as: obtaining the pre-collected specified number of sample pictures, manually marking them to mark the front-end device, and deleting the sample pictures in which no front-end device is marked, so as to obtain the marked pictures; the training mode adopted in this case is actually a semi-supervised learning mode. The preset ratio is, for example, 9:1 or 8:2. Training and verification then follow; the training can be carried out in any feasible way, for example by updating the network parameters of each layer with the back propagation method. If the verification result is that the verification is passed, the preliminary model is qualified for prediction work, and the server therefore marks the preliminary model as the front-end device prediction model.
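Steps S001-S006 above can be sketched in Python. This is a minimal illustration only: the function names, the 9:1 ratio and the pass threshold are assumptions, and the actual deep convolutional neural network training of step S003 is abstracted behind an arbitrary `predict` callable.

```python
import random

def split_marked_pictures(marked_pictures, train_ratio=0.9, seed=42):
    """Steps S001-S002 sketch: divide the marked pictures into a training
    set and a verification set according to a preset ratio (e.g. 9:1)."""
    pictures = list(marked_pictures)
    random.Random(seed).shuffle(pictures)   # shuffle reproducibly before splitting
    cut = int(len(pictures) * train_ratio)  # index separating the two sets
    return pictures[:cut], pictures[cut:]

def verify_preliminary_model(predict, verification_set, pass_threshold=0.95):
    """Steps S004-S006 sketch: run the preliminary model on the
    verification pictures (given as (picture, label) pairs) and decide
    whether it passes verification and qualifies as the prediction model."""
    correct = sum(1 for picture, label in verification_set
                  if predict(picture) == label)
    accuracy = correct / len(verification_set)
    return accuracy >= pass_threshold
```

In practice `predict` would wrap the trained deep convolutional neural network; here any callable returning a class label suffices for the sketch.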
Further, the step S2 of determining whether the picture data matches the specified operation content includes:
s201, a server acquires a default picture which is shot in advance when the appointed front-end equipment does not execute operation;
s202, the server modifies the default picture according to the designated operation content to obtain a standard operation picture;
s203, the server carries out region interception processing on the picture data to obtain a region picture; wherein the specified front-end device is located in the region picture;
s204, the server carries out similarity calculation processing on the standard operation picture and the region picture according to a preset similarity calculation method to obtain a similarity value;
s205, the server judges whether the similarity value is larger than a preset similarity threshold value;
and S206, if the similarity value is larger than a preset similarity threshold, the server judges that the picture data is matched with the specified operation content.
In this way it is determined whether the picture data matches the specified operation content. Take a traffic light as an example of the designated front-end device, with the default picture showing no light lit. If the specified operation content is that the red light is lit, the color at the red-light position in the default picture is changed to red to obtain the standard operation picture. Since the main part of the picture data is generally not the front-end device, region interception is performed to exclude unnecessary interfering data. It is then determined whether the region picture also shows the red light lit, i.e. whether the picture data matches the specified operation content. This is embodied in a specific calculation: according to a preset similarity calculation method, similarity calculation is performed on the standard operation picture and the region picture to obtain a similarity value. The similarity calculation can be realized in any feasible manner; for example, contours are extracted, the color values of the different contour regions are determined, the color-value differences of the corresponding contours in the two pictures are compared in turn, and the similarity value is obtained from the color-value differences. Of course, other similarity calculation methods may also be adopted, and are not described herein again.
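As noted above, the similarity calculation of steps S204-S206 can be realized in any feasible manner; the sketch below assumes one simple choice, a per-pixel color-difference measure over pictures represented as flat lists of (R, G, B) tuples. The function names and the 0.8 threshold are illustrative, not prescribed by the application.

```python
def color_similarity(standard_picture, region_picture, max_diff=255 * 3):
    """Step S204 sketch: compare two equally sized pictures pixel by
    pixel and return a similarity value in [0, 1]; 1.0 means identical."""
    assert len(standard_picture) == len(region_picture)
    total = 0.0
    for (r1, g1, b1), (r2, g2, b2) in zip(standard_picture, region_picture):
        diff = abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
        total += 1.0 - diff / max_diff  # per-pixel similarity contribution
    return total / len(standard_picture)

def matches_operation_content(similarity_value, similarity_threshold=0.8):
    """Steps S205-S206: the picture data matches the specified operation
    content when the similarity value exceeds the preset threshold."""
    return similarity_value > similarity_threshold
```

For the traffic-light example, a region picture whose red-light area is lit would score close to the red standard operation picture, while an unlit one would fall below the threshold.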
The execution terminal of steps S4 and S5 is the AR helmet terminal. S4, the AR helmet terminal adopts a preset image sensor to collect a first image in front of the wearer of the AR helmet terminal, and judges whether the first image is the external touch screen of the designated front-end device; the external touch screen is preset with a touch sensor, but does not display images;
s5, if the first image is the external touch screen of the appointed front-end equipment, displaying an AR interface by the AR helmet terminal so that the AR interface is superposed on the external touch screen; the AR interface is formed by all modules of the appointed front-end equipment and the mutual connection relation among all the modules.
The AR display process of the present application differs from the traditional AR image display process in the following respects:
In the AR display process of the present application, the AR helmet terminal must cooperate with the external touch screen of the designated front-end device, and only the wearer of the AR helmet terminal can see the AR image, while the required operation data is collected by the touch sensor on the external touch screen. By means of this special design, the scheme has strong confidentiality (no one except the wearer can see the AR image, so the structure of the designated front-end device cannot be obtained); more importantly, the wearer's virtual operation data can be stored in the designated device terminal and used to judge whether the repair operation actually performed by the wearer is up to standard, i.e. identical to the expected operation, thereby reducing the possibility of device damage, in particular malicious damage.
Moreover, since the data of the simulated operation is stored in the device terminal, a subsequent inspector can know whether the repair is compliant and complete (by comparing the post-repair state of the front-end device with the data of the simulated operation) without going through the server.
The cooperation between the AR helmet and the designated front-end device is first embodied in that the AR helmet must determine that what lies ahead is indeed the external touch screen of the designated front-end device, which is realized by the image sensor and image recognition. After this is confirmed, the AR interface is displayed; the AR interface is supported by the AR data transmitted by the server, so that it matches the designated front-end device, i.e. the AR interface is formed by all the modules of the designated front-end device and the interconnections between them. The designated front-end device of the present application is constructed from highly integrated modules, which facilitates maintenance; in particular, the simplest maintenance is to replace a module only when it fails.
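The AR interface content described above, namely all the modules of the designated front-end device and the interconnections between them, can be modelled as a small graph; the sketch below also illustrates the plug-in repair idea of replacing a failed module while keeping its connections. The class name and module names are assumptions for illustration.

```python
class ARInterfaceModel:
    """Sketch of the AR interface content: the device's modules and the
    undirected interconnections between them."""

    def __init__(self):
        self.modules = set()
        self.connections = set()

    def add_module(self, name):
        self.modules.add(name)

    def connect(self, a, b):
        # connections are undirected, so store a normalised (sorted) pair
        if a in self.modules and b in self.modules:
            self.connections.add(tuple(sorted((a, b))))

    def replace_module(self, old, new):
        """Plug-in style repair: swap a failed module for a new one while
        preserving its interconnections."""
        self.modules.discard(old)
        self.modules.add(new)
        self.connections = {
            tuple(sorted((new if m == old else m) for m in pair))
            for pair in self.connections
        }
```

Such a model is what the server's designated AR data would populate, and what the AR helmet terminal would render superposed on the external touch screen.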
Further, step S5, in which, if the first image is the external touch screen of the designated front-end device, the AR helmet terminal displays an AR interface so that the AR interface is superposed on the external touch screen, includes:
s501, if the first image is the external touch screen of the designated front-end device, the AR helmet terminal displays an AR interface with operation guidance, so that the AR interface is superposed on the external touch screen, and instructs the wearer to perform the simulated repair process according to the operation guidance.
This reduces the requirements on repair personnel, so that even ordinary citizens or junior maintenance personnel can be competent for the repair work. The operation guidance can be realized in any way, for example a virtual arrow is used to instruct the wearer to perform the corresponding operation. Because the designated front-end device in the present application adopts highly integrated modules, the actual operation process is simple and can readily be guided to completion through the AR interface.
The execution subject of steps S6 and S7 is the designated device terminal. S6, the designated device terminal senses, through the external touch screen, a sensing signal set of the wearer's touch operations, and judges whether the touch operation corresponding to the sensing signal set constitutes a complete repair process; when performing the simulated repair operation in the AR space, the wearer should first touch the external touch screen; the designated device terminal is used for controlling the designated front-end device. S7, if the touch operation corresponding to the sensing signal set is a complete repair process, the designated device terminal controls the designated front-end device to open its shell so that the wearer can perform the repair operation.
In the present application, although the designated front-end device has failed, it has not completely stopped working; the failure is manifested, for example, as an instruction execution error, so both the signal sensing of the external touch screen and the opening of the shell of the designated front-end device under the control of the designated device terminal can still be realized. Furthermore, the designated device terminal can be divided into a first sub-terminal and a second sub-terminal (which can be isolated from each other), used respectively for controlling the designated front-end device and for acquiring the sensing signals of the external touch screen and opening the shell, so that as long as the external touch screen is not physically damaged, the touch screen and the shell can be operated smoothly regardless of the type of fault.
In addition, because the wearer should first touch the external touch screen when performing the simulated repair operation in the AR space, the wearer's operation process during the simulated repair is stored in the form of touch signals, thereby forming the sensing signal set. The simulated repair operation is performed, for example, by dragging different modules to replace modules and modify their connections. When the touch operation corresponding to the sensing signal set is determined to be a complete repair process, the designated device terminal controls the designated front-end device to open its shell so that the wearer can perform the repair operation.
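The judgment of step S6, that the sensed touch operations form a complete repair process, can be sketched as an in-order subsequence check: every required repair step must appear among the sensed operations, in the required order. The step names used here are hypothetical.

```python
def is_complete_repair(sensed_operations, required_steps):
    """Return True only if every required repair step appears among the
    sensed touch operations, in the required order (extra touches such
    as stray taps are tolerated)."""
    remaining = iter(sensed_operations)
    # 'step in remaining' consumes the iterator up to the match, so the
    # required steps must occur as an in-order subsequence
    return all(step in remaining for step in required_steps)
```

Only when this check succeeds would the designated device terminal control the designated front-end device to open its shell.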
Further, after step S7, in which, if the touch operation corresponding to the sensing signal set is a complete repair process, the designated device terminal controls the designated front-end device to open its shell, the method includes:
s71, after the wearer finishes the repair operation, a preset owner of an inspection terminal performs manual detection processing on the designated front-end device, and sends the manual detection processing result to the server through the inspection terminal; the inspection terminal is a mobile terminal, and the manual detection processing result at least comprises all the modules of the repaired designated front-end device and the interconnections between them;
s72, the server acquires the sensing signal set sent by the appointed equipment terminal;
s73, judging whether the sensing signal set is matched with the manual detection processing result by the server;
and S74, if the sensing signal set is matched with the manual detection processing result, the server judges that the designated front-end equipment is successfully repaired.
It should be noted that although the designated device terminal stores the sensing signal set, which reflects the process of the simulated operation, the designated device terminal itself cannot know the correct operation process, so an inspector is required to determine the effect of the repair. The holder of the inspection terminal therefore performs manual detection processing on the designated front-end device and sends the result to the server through the inspection terminal. The server then acquires the sensing signal set and judges whether the simulated operation corresponding to the sensing signal set is the same as the actual operation, i.e. whether the sensing signal set matches the manual detection processing result. If they match, the actual repair operation is the same as the simulated repair operation, so no possible malicious damage has occurred, and the server judges that the designated front-end device has been successfully repaired.
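The matching of steps S72-S74 can be sketched as follows: replay the stored sensing signal set to obtain the module configuration the simulated repair should have produced, then compare it with the configuration the inspector actually observed. The `replace:<old>:<new>` signal encoding and the function names are assumptions for illustration.

```python
def replay_sensing_signals(initial_modules, sensing_signal_set):
    """Replay stored touch records of the form 'replace:<old>:<new>' to
    derive the module set the simulated (AR) repair should have produced."""
    modules = set(initial_modules)
    for signal in sensing_signal_set:
        if signal.startswith("replace:"):
            _, old, new = signal.split(":")
            modules.discard(old)  # the failed module is removed...
            modules.add(new)      # ...and the new plug-in module inserted
    return modules

def repair_successful(simulated_state, manual_detection_result):
    """Steps S73-S74 sketch: the repair is judged successful when the
    simulated configuration (modules, connections) equals the one the
    inspector manually detected on the repaired device."""
    sim_modules, sim_connections = simulated_state
    real_modules, real_connections = manual_detection_result
    return (set(sim_modules) == set(real_modules)
            and set(sim_connections) == set(real_connections))
```

A mismatch between the replayed simulation and the manually detected configuration would indicate a non-compliant, and possibly malicious, actual repair.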
It should be noted that the present application can also perform the verification locally, without going through the server: the designated device terminal obtains the manual detection processing result sent by the inspection terminal and then judges whether it matches the sensing signal set. The above steps are nevertheless described as being executed by the server, because the purpose is not only to determine whether the repair is successful, but also to enable the server to update the state of the designated front-end device.
According to the remote judgment and repair method for the front-end fault equipment of the smart city, picture data are obtained and input into a preset front-end equipment prediction model for processing, so that a prediction result output by the front-end equipment prediction model is obtained; acquiring the acquisition position and acquisition time of the picture data, and acquiring appointed front-end equipment; acquiring appointed operation content; if the picture data are not matched with the specified operation contents, the server sends the specified AR data to a preset AR helmet terminal; the method comprises the steps that an AR helmet terminal collects a first image in front of a wearer of the AR helmet terminal by adopting a preset image sensor; if the first image is the external touch screen of the appointed front-end device, the AR helmet terminal displays an AR interface; the appointed equipment terminal senses a sensing signal set of the touch operation of the wearer through the external touch screen; and if the touch operation corresponding to the sensing signal set is a complete repairing process, the shell is opened to carry out repairing operation, so that remote judgment and repair of the front-end fault equipment are realized.
Example 2:
referring to fig. 2, this embodiment provides a smart city front-end faulty device remote judging and repairing system for implementing the remote judging and repairing method described in embodiment 1, which includes:
a prediction result obtaining unit 10, configured to instruct a server to obtain picture data from at least one urban network by using a preset data crawling technique, input the picture data into a preset front-end device prediction model, and process the picture data to obtain a prediction result output by the front-end device prediction model, and determine whether the prediction result is a front-end device picture; the prediction result comprises a front-end equipment picture or a non-front-end equipment picture; the front-end equipment prediction model is formed by training based on a preset deep convolutional neural network model in a supervised learning mode;
a designated front-end device obtaining unit 20, configured to instruct, if the prediction result is a front-end device picture, the server to obtain a collection position and collection time of the picture data, and obtain, according to a first correspondence table of a position and a front-end device, a designated front-end device corresponding to the collection position; acquiring appointed operation content corresponding to the acquisition time according to a second corresponding table of time and appointed front-end equipment operation content; judging whether the picture data is matched with the specified operation content;
a designated AR data sending unit 30, configured to instruct, if the picture data does not match the designated operation content, the server to extract designated AR data corresponding to the designated front-end device from a preset database, and send the designated AR data to a preset AR helmet terminal;
the first image acquisition unit 40 is configured to instruct the AR helmet terminal to adopt a preset image sensor, acquire a first image in front of a wearer of the AR helmet terminal, and determine whether the first image is an external touch screen of the specified front-end device; the external touch screen is preset with a touch sensor, but does not display images;
an AR interface display unit 50, configured to indicate that, if the first image is an external touch screen of the specified front-end device, the AR helmet terminal displays an AR interface, so that the AR interface is overlapped with the external touch screen; the AR interface is formed by all modules of the appointed front-end equipment and the mutual connection relation among all the modules;
a sensing signal set sensing unit 60, configured to instruct a designated device terminal to sense a sensing signal set of a touch operation performed by the wearer through the external touch screen, and determine whether the touch operation corresponding to the sensing signal set is a complete repair process; when the wearer carries out simulation repair operation in the AR space, the wearer should touch the external touch screen firstly; the appointed equipment terminal is used for controlling the appointed front-end equipment;
and a repairing operation unit 70, configured to instruct, if the touch operation corresponding to the sensing signal set is a complete repairing process, the designated device terminal to control the designated front-end device to open the shell, so that the wearer can perform a repairing operation.
Example 3:
the difference between this embodiment and embodiment 2 is that the smart city front-end failure device remote determination and repair system in this embodiment further includes:
the system comprises an artificial marking unit, a processing unit and a processing unit, wherein the artificial marking unit is used for indicating a server to obtain a preset number of collected sample pictures and carrying out artificial marking on the sample pictures so as to mark front-end equipment in the sample pictures, and thus, marked pictures are obtained;
the picture dividing unit is used for indicating the server to divide the marked picture into a picture for training and a picture for verification according to a preset proportion;
the model training unit is used for indicating the server to call a preset deep convolutional neural network model and inputting the pictures for training into the deep convolutional neural network model for training to obtain a primary model;
the model verification unit is used for indicating the server to adopt the verification picture to verify the preliminary model so as to obtain a verification result;
the verification result judging unit is used for indicating the server to judge whether the verification result is passed;
and the model marking unit is used for indicating that if the verification result is that the verification is passed, the server marks the preliminary model as a front-end equipment prediction model.
In one embodiment, the designated front-end device obtaining unit includes:
a default picture acquiring subunit, configured to instruct a server to acquire a default picture obtained by pre-shooting the specified front-end device when no operation is performed;
the standard operation picture acquisition subunit is used for indicating the server to modify the default picture according to the specified operation content so as to obtain a standard operation picture;
the regional picture intercepting subunit is used for indicating the server to carry out regional intercepting processing on the picture data so as to obtain a regional picture; wherein the specified front-end device is located in the region picture;
the similarity calculation subunit is used for indicating the server to perform similarity calculation processing on the standard operation picture and the region picture according to a preset similarity calculation method so as to obtain a similarity value;
the similarity value judging subunit is used for indicating the server to judge whether the similarity value is greater than a preset similarity threshold value;
and the matching judgment subunit is used for indicating that the server judges that the picture data is matched with the specified operation content if the similarity value is greater than a preset similarity threshold value.
In one embodiment, the AR interface presentation unit includes:
and the AR interface display subunit is used for indicating that if the first image is the external touch screen of the appointed front-end device, the AR helmet terminal displays an AR interface with operation guide so that the AR interface is superposed on the external touch screen, and indicating the wearer to perform a simulation repair process according to the operation guide.
Example 4:
the difference between this embodiment and embodiment 2 or 3 is only that the smart city front-end failure device remote determination and repair system in this embodiment further includes:
the manual detection result sending unit is used for indicating that a preset owner of the inspection terminal carries out manual detection processing on the appointed front-end equipment after the wearer finishes the repair operation, and sending a manual detection processing result to the server through the inspection terminal; the inspection terminal is a mobile terminal, and the manual detection processing result at least comprises all modules of the repaired specified equipment terminal and the interconnection relationship among all the modules;
a sensing signal set acquisition unit, configured to instruct a server to acquire the sensing signal set sent by the specified device terminal;
the sensing signal set judgment unit is used for indicating a server to judge whether the sensing signal set is matched with the manual detection processing result;
and the repair judging unit is used for indicating that if the sensing signal set is matched with the manual detection processing result, the server judges that the specified front-end equipment is successfully repaired.
The operation performed by each of the units or the sub-units corresponds to the steps of the method for remotely judging and repairing the front-end failure device of the smart city according to the foregoing embodiments, and details are not repeated herein.
According to the remote judgment and repair system for the front-end fault equipment of the smart city, picture data are obtained and input into a preset front-end equipment prediction model for processing, so that a prediction result output by the front-end equipment prediction model is obtained; acquiring the acquisition position and acquisition time of the picture data, and acquiring appointed front-end equipment; acquiring appointed operation content; if the picture data are not matched with the specified operation contents, the server sends the specified AR data to a preset AR helmet terminal; the method comprises the steps that an AR helmet terminal collects a first image in front of a wearer of the AR helmet terminal by adopting a preset image sensor; if the first image is the external touch screen of the appointed front-end device, the AR helmet terminal displays an AR interface; the appointed equipment terminal senses a sensing signal set of the touch operation of the wearer through the external touch screen; and if the touch operation corresponding to the sensing signal set is a complete repairing process, the shell is opened to carry out repairing operation, so that remote judgment and repair of the front-end fault equipment are realized.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium provided herein and used in the examples may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, system, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, system, article, or method. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, system, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A remote judgment and repair method for front-end fault equipment of a smart city is characterized by comprising the following steps:
s1, the server acquires picture data from at least one city network by adopting a preset data crawling technology, inputs the picture data into a preset front-end equipment prediction model for processing to obtain a prediction result output by the front-end equipment prediction model, and judges whether the prediction result is a front-end equipment picture or not; the prediction result comprises a front-end equipment picture or a non-front-end equipment picture; the front-end equipment prediction model is formed by training based on a preset deep convolutional neural network model in a supervised learning mode;
s2, if the prediction result is a front-end equipment picture, the server acquires the acquisition position and the acquisition time of the picture data, and acquires the appointed front-end equipment corresponding to the acquisition position according to a first corresponding table of the position and the front-end equipment; acquiring appointed operation content corresponding to the acquisition time according to a second corresponding table of time and appointed front-end equipment operation content; judging whether the picture data is matched with the specified operation content;
s3, if the picture data are not matched with the designated operation content, the server extracts designated AR data corresponding to the designated front-end equipment from a preset database and sends the designated AR data to a preset AR helmet terminal;
s4, the AR helmet terminal collects a first image in front of a wearer of the AR helmet terminal by adopting a preset image sensor, and judges whether the first image is an external touch screen of the appointed front-end equipment; the external touch screen is preset with a touch sensor, but does not display images;
s5, if the first image is the external touch screen of the appointed front-end equipment, displaying an AR interface by the AR helmet terminal so that the AR interface is superposed on the external touch screen; the AR interface is formed by all modules of the appointed front-end equipment and the mutual connection relation among all the modules;
s6, sensing a sensing signal set of the touch operation of the wearer by the appointed equipment terminal through the external touch screen, and judging whether the touch operation corresponding to the sensing signal set is a complete repairing process; when the wearer carries out simulation repair operation in the AR space, the wearer should touch the external touch screen firstly; the appointed equipment terminal is used for controlling the appointed front-end equipment;
and S7, if the touch operation corresponding to the sensing signal set is a complete repairing process, the appointed equipment terminal controls the appointed front-end equipment to open the shell so that the wearer can perform repairing operation.
2. The method for remote judgment and repair of smart city front-end fault equipment according to claim 1, wherein before step S1, in which the server uses a preset data crawling technique to obtain picture data from at least one city network, inputs the picture data into a preset front-end equipment prediction model for processing to obtain a prediction result output by the front-end equipment prediction model, and judges whether the prediction result is a front-end equipment picture, the method comprises:
S001. The server obtains a preset number of collected sample pictures and manually annotates them to mark the front-end equipment in the sample pictures, thereby obtaining annotated pictures;
S002. The server divides the annotated pictures into training pictures and verification pictures according to a preset proportion;
S003. The server calls a preset deep convolutional neural network model and inputs the training pictures into it for training to obtain a preliminary model;
S004. The server verifies the preliminary model with the verification pictures to obtain a verification result;
S005. The server judges whether the verification result indicates that verification has passed;
S006. If the verification result indicates that verification has passed, the server marks the preliminary model as the front-end equipment prediction model.
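The S001–S006 workflow is a standard supervised-learning pipeline: split the annotated pictures by a preset proportion, train, then accept the preliminary model only if it passes verification. A minimal sketch of the split and the pass/fail decision, assuming an 80/20 proportion and a 0.9 accuracy threshold (both values, and the callable stand-in for the model, are assumptions not stated in the claim):

```python
import random

def split_annotated(annotated, train_ratio=0.8, seed=0):
    """S002: divide annotated pictures into training and verification
    sets by a preset proportion (the 80/20 ratio is an assumed value)."""
    items = list(annotated)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

def verification_passed(model, verification_set, threshold=0.9):
    """S004-S005: run the preliminary model on the verification pictures
    and treat verification as passed when accuracy reaches a preset
    threshold (0.9 is an assumed value)."""
    if not verification_set:
        return False
    correct = sum(1 for picture, label in verification_set
                  if model(picture) == label)
    return correct / len(verification_set) >= threshold
```

In practice the `model` here would be the deep convolutional neural network trained in S003; a trivial callable is enough to exercise the accept/reject logic.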
3. The method for remote judgment and repair of smart city front-end fault equipment according to claim 1, wherein the step S2 of judging whether the picture data matches the appointed operation content comprises:
S201. The server obtains a default picture photographed in advance while the appointed front-end equipment is not executing any operation;
S202. The server modifies the default picture according to the appointed operation content to obtain a standard operation picture;
S203. The server performs region-interception processing on the picture data to obtain a region picture; wherein the appointed front-end equipment is located in the region picture;
S204. The server performs similarity calculation on the standard operation picture and the region picture according to a preset similarity calculation method to obtain a similarity value;
S205. The server judges whether the similarity value is greater than a preset similarity threshold;
S206. If the similarity value is greater than the preset similarity threshold, the server judges that the picture data matches the appointed operation content.
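Steps S204–S206 reduce matching to one number compared against a threshold. The claim names neither the "preset similarity calculation method" nor the threshold, so the sketch below assumes cosine similarity over flattened grayscale pixel grids and a threshold of 0.95:

```python
import math

def similarity_value(standard_picture, region_picture):
    """S204: one common similarity measure -- cosine similarity over
    flattened grayscale pixel grids. The exact "preset similarity
    calculation method" is not named in the claim; this is an assumption."""
    a = [p for row in standard_picture for p in row]
    b = [p for row in region_picture for p in row]
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm if norm else 0.0

def picture_matches(standard_picture, region_picture, threshold=0.95):
    """S205-S206: the picture data matches the appointed operation content
    when the similarity value exceeds the preset threshold (0.95 assumed)."""
    return similarity_value(standard_picture, region_picture) > threshold
```

A production system would more likely use template matching or a learned embedding, but any method that yields a scalar in a fixed range slots into the same S205 threshold comparison.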
4. The method for remote judgment and repair of smart city front-end fault equipment according to claim 1, wherein the step S5 of displaying, by the AR helmet terminal, an AR interface if the first image shows the external touch screen of the appointed front-end equipment, so that the AR interface is superimposed on the external touch screen, comprises:
S501. If the first image shows the external touch screen of the appointed front-end equipment, the AR helmet terminal displays an AR interface with operation guidance, so that the AR interface is superimposed on the external touch screen, and instructs the wearer to perform a simulated repair process according to the operation guidance.
5. The method for remote judgment and repair of smart city front-end fault equipment according to claim 1, wherein after step S7, in which the appointed equipment terminal controls the appointed front-end equipment to open its housing if the touch operations corresponding to the sensing signal set constitute a complete repair process, the method comprises:
S71. After the wearer finishes the repair operation, a preset owner of an inspection terminal performs manual detection processing on the appointed front-end equipment and sends the manual detection processing result to the server through the inspection terminal; the inspection terminal is a mobile terminal, and the manual detection processing result at least comprises all the modules of the repaired appointed equipment terminal and the interconnection relationships among the modules;
S72. The server obtains the sensing signal set sent by the appointed equipment terminal;
S73. The server judges whether the sensing signal set matches the manual detection processing result;
S74. If the sensing signal set matches the manual detection processing result, the server judges that the appointed front-end equipment has been successfully repaired.
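The S73 comparison cross-checks two views of the repaired equipment: the module connections implied by the sensed touch operations and those in the inspector's manual report. The claim does not fix a data model, so the sketch below assumes both sides are reduced to sets of module-pair connections, compared order-insensitively:

```python
def normalised(connections):
    """Treat module connections as undirected: ('a', 'b') == ('b', 'a')."""
    return {tuple(sorted(edge)) for edge in connections}

def repair_successful(sensing_signal_set, manual_detection_result):
    """S73-S74: judge the repair successful when the module connections
    implied by the sensing signal set equal those in the manual detection
    result. Modelling both as sets of module pairs is an assumption;
    the claim only requires that the two "match"."""
    return normalised(sensing_signal_set) == normalised(manual_detection_result)
```

Set equality makes the check tolerant of reporting order while still flagging any missing or extra connection.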
6. A system for remote judgment and repair of smart city front-end fault equipment, characterized by comprising:
a prediction result obtaining unit, used for instructing the server to use a preset data crawling technique to obtain picture data from at least one city network, input the picture data into a preset front-end equipment prediction model for processing to obtain a prediction result output by the front-end equipment prediction model, and judge whether the prediction result is a front-end equipment picture; the prediction result is either a front-end equipment picture or a non-front-end-equipment picture; the front-end equipment prediction model is trained from a preset deep convolutional neural network model in a supervised learning manner;
an appointed front-end equipment obtaining unit, used for instructing the server, if the prediction result is a front-end equipment picture, to obtain the acquisition position and acquisition time of the picture data, obtain the appointed front-end equipment corresponding to the acquisition position according to a first correspondence table of positions and front-end equipment, obtain the appointed operation content corresponding to the acquisition time according to a second correspondence table of times and appointed front-end equipment operation contents, and judge whether the picture data matches the appointed operation content;
an appointed AR data sending unit, used for instructing the server, if the picture data does not match the appointed operation content, to extract the appointed AR data corresponding to the appointed front-end equipment from a preset database and send the appointed AR data to a preset AR helmet terminal;
a first image collection unit, used for instructing the AR helmet terminal to use a preset image sensor to collect a first image in front of the wearer of the AR helmet terminal, and judge whether the first image shows the external touch screen of the appointed front-end equipment; the external touch screen is preset with a touch sensor but does not display images;
an AR interface display unit, used for instructing the AR helmet terminal, if the first image shows the external touch screen of the appointed front-end equipment, to display an AR interface so that the AR interface is superimposed on the external touch screen; the AR interface is composed of all the modules of the appointed front-end equipment and the interconnection relationships among the modules;
a sensing signal set sensing unit, used for instructing the appointed equipment terminal to sense, through the external touch screen, a sensing signal set generated by the wearer's touch operations, and judge whether the touch operations corresponding to the sensing signal set constitute a complete repair process; when performing a simulated repair operation in the AR space, the wearer must first touch the external touch screen; the appointed equipment terminal is used to control the appointed front-end equipment;
a repair operation unit, used for instructing the appointed equipment terminal, if the touch operations corresponding to the sensing signal set constitute a complete repair process, to control the appointed front-end equipment to open its housing so that the wearer can perform the repair operation.
7. The system for remote judgment and repair of smart city front-end fault equipment according to claim 6, wherein the system comprises:
a manual annotation unit, used for instructing the server to obtain a preset number of collected sample pictures and manually annotate them to mark the front-end equipment in the sample pictures, thereby obtaining annotated pictures;
a picture dividing unit, used for instructing the server to divide the annotated pictures into training pictures and verification pictures according to a preset proportion;
a model training unit, used for instructing the server to call a preset deep convolutional neural network model and input the training pictures into it for training to obtain a preliminary model;
a model verification unit, used for instructing the server to verify the preliminary model with the verification pictures to obtain a verification result;
a verification result judging unit, used for instructing the server to judge whether the verification result indicates that verification has passed;
a model marking unit, used for instructing the server, if the verification result indicates that verification has passed, to mark the preliminary model as the front-end equipment prediction model.
8. The system for remote judgment and repair of smart city front-end fault equipment according to claim 6, wherein the appointed front-end equipment obtaining unit comprises:
a default picture obtaining subunit, used for instructing the server to obtain a default picture photographed in advance while the appointed front-end equipment is not executing any operation;
a standard operation picture obtaining subunit, used for instructing the server to modify the default picture according to the appointed operation content to obtain a standard operation picture;
a region picture intercepting subunit, used for instructing the server to perform region-interception processing on the picture data to obtain a region picture; wherein the appointed front-end equipment is located in the region picture;
a similarity calculation subunit, used for instructing the server to perform similarity calculation on the standard operation picture and the region picture according to a preset similarity calculation method to obtain a similarity value;
a similarity value judging subunit, used for instructing the server to judge whether the similarity value is greater than a preset similarity threshold;
a matching judgment subunit, used for instructing the server, if the similarity value is greater than the preset similarity threshold, to judge that the picture data matches the appointed operation content.
9. The system for remote judgment and repair of smart city front-end fault equipment according to claim 6, wherein the AR interface display unit comprises:
an AR interface display subunit, used for instructing the AR helmet terminal, if the first image shows the external touch screen of the appointed front-end equipment, to display an AR interface with operation guidance so that the AR interface is superimposed on the external touch screen, and to instruct the wearer to perform a simulated repair process according to the operation guidance.
10. The system for remote judgment and repair of smart city front-end fault equipment according to claim 6, wherein the system comprises:
a manual detection result sending unit, used for instructing a preset owner of an inspection terminal, after the wearer finishes the repair operation, to perform manual detection processing on the appointed front-end equipment and send the manual detection processing result to the server through the inspection terminal; the inspection terminal is a mobile terminal, and the manual detection processing result at least comprises all the modules of the repaired appointed equipment terminal and the interconnection relationships among the modules;
a sensing signal set obtaining unit, used for instructing the server to obtain the sensing signal set sent by the appointed equipment terminal;
a sensing signal set judging unit, used for instructing the server to judge whether the sensing signal set matches the manual detection processing result;
a repair judging unit, used for instructing the server, if the sensing signal set matches the manual detection processing result, to judge that the appointed front-end equipment has been successfully repaired.
CN202110905756.0A 2021-08-09 2021-08-09 Remote judgment and repair method and system for smart city front-end fault equipment Active CN113361498B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110905756.0A CN113361498B (en) 2021-08-09 2021-08-09 Remote judgment and repair method and system for smart city front-end fault equipment

Publications (2)

Publication Number Publication Date
CN113361498A CN113361498A (en) 2021-09-07
CN113361498B true CN113361498B (en) 2021-11-09

Family

ID=77540707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110905756.0A Active CN113361498B (en) 2021-08-09 2021-08-09 Remote judgment and repair method and system for smart city front-end fault equipment

Country Status (1)

Country Link
CN (1) CN113361498B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106321072A (en) * 2015-06-15 2017-01-11 中国科学院沈阳自动化研究所 Method for pumping well fault diagnosis based on pump indicator diagram
CN109934123A (en) * 2019-02-23 2019-06-25 蒂姆维澳(上海)网络技术有限公司 Fault code identifying system and method based on OCR technique, internet and AR technology
CN111722714A (en) * 2020-06-17 2020-09-29 贵州电网有限责任公司 Digital substation metering operation inspection auxiliary method based on AR technology
CN112001709A (en) * 2020-09-04 2020-11-27 贵州电网有限责任公司 Intelligent assignment scheduling AR multi-person remote assistance method based on ant colony algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180284758A1 (en) * 2016-05-09 2018-10-04 StrongForce IoT Portfolio 2016, LLC Methods and systems for industrial internet of things data collection for equipment analysis in an upstream oil and gas environment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method and system for remote judgment and repair of front-end failure equipment in smart cities

Effective date of registration: 20221028

Granted publication date: 20211109

Pledgee: Bank of China Limited Wuhan Jianghan sub branch

Pledgor: Jingwang Technology Co.,Ltd.

Registration number: Y2022420000356