CN116890668B - Safe charging method and charging device for information synchronous interconnection - Google Patents

Safe charging method and charging device for information synchronous interconnection

Info

Publication number
CN116890668B
CN116890668B (application CN202311146180.XA)
Authority
CN
China
Prior art keywords
video
identification
coordinate
acquisition
ordering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311146180.XA
Other languages
Chinese (zh)
Other versions
CN116890668A (en)
Inventor
徐川子
杨玉强
胡若云
姚冰峰
王伟峰
郭大琦
茹传红
夏霖
马笛
洪潇
秦建
李题印
罗扬帆
冯涛
张驰
陈奕
何岳昊
周波
夏旭华
陈识微
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Zhejiang Electric Power Co Ltd Hangzhou Fuyang District Power Supply Co
Zhejiang University ZJU
State Grid Zhejiang Electric Power Co Ltd
Hangzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Taizhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Zhejiang Electric Power Co Ltd Hangzhou Fuyang District Power Supply Co
Zhejiang University ZJU
State Grid Zhejiang Electric Power Co Ltd
Hangzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Taizhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Zhejiang Electric Power Co Ltd Hangzhou Fuyang District Power Supply Co, Zhejiang University ZJU, State Grid Zhejiang Electric Power Co Ltd, Hangzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd, Taizhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd filed Critical State Grid Zhejiang Electric Power Co Ltd Hangzhou Fuyang District Power Supply Co
Priority to CN202311146180.XA priority Critical patent/CN116890668B/en
Publication of CN116890668A publication Critical patent/CN116890668A/en
Application granted granted Critical
Publication of CN116890668B publication Critical patent/CN116890668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60L PROPULSION OF ELECTRICALLY-PROPELLED VEHICLES; SUPPLYING ELECTRIC POWER FOR AUXILIARY EQUIPMENT OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRODYNAMIC BRAKE SYSTEMS FOR VEHICLES IN GENERAL; MAGNETIC SUSPENSION OR LEVITATION FOR VEHICLES; MONITORING OPERATING VARIABLES OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRIC SAFETY DEVICES FOR ELECTRICALLY-PROPELLED VEHICLES
    • B60L 53/00 Methods of charging batteries, specially adapted for electric vehicles; Charging stations or on-board charging equipment therefor; Exchange of energy storage elements in electric vehicles
    • B60L 53/60 Monitoring or controlling charging stations
    • B60L 53/66 Data transfer between charging stations and vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/60 Other road transportation technologies with climate change mitigation effect
    • Y02T 10/70 Energy storage systems for electromobility, e.g. batteries
    • Y02T 10/7072 Electromobility specific charging systems or methods for batteries, ultracapacitors, supercapacitors or double-layer capacitors
    • Y02T 90/00 Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02T 90/10 Technologies relating to charging of electric vehicles
    • Y02T 90/12 Electric charging stations

Landscapes

  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a safe charging method and charging device for information synchronous interconnection, comprising: the camera end responds to a first acquisition instruction by acquiring video and taking the moment the first acquisition instruction is received as the initial acquisition moment; the camera end responds to a second acquisition instruction by acquiring and identifying video and taking the moment the second acquisition instruction is received as the identification acquisition moment; if it is determined that the identification monitoring area contains a corresponding identification target, the identification target is sent to the server and the corresponding moment is taken as the reminding acquisition moment; if it is determined that the safety reminding condition is met, the video of the identification monitoring area is synchronized to the request end from the reminding acquisition moment onward and the reminding device is controlled to respond; and the video is processed based on the initial acquisition moment, the identification acquisition moment, the termination acquisition moment and/or the reminding acquisition moment and then synchronized to the request end.

Description

Safe charging method and charging device for information synchronous interconnection
Technical Field
The present invention relates to data processing technologies, and in particular, to a secure charging method and a secure charging device for information synchronization interconnection.
Background
Replacing traditional fossil fuels with renewable energy is the trend for the future development of the automobile industry. Compared with conventional fuel vehicles, electric vehicles are green, energy-saving, environmentally friendly and efficient, and the new energy vehicle industry is developing rapidly.
At present, abnormal situations often occur while an electric vehicle is charging, for example another user pulls out the charging gun that is in use or touches the vehicle, which makes the charging process unstable; when such a problem occurs, the user can obtain neither customized traceability data nor an effective reminder.
Therefore, how to combine monitoring data to obtain customized traceability data for the user and to remind the user effectively has become an urgent problem to be solved.
Disclosure of Invention
Embodiments of the invention provide a safe charging method and a safe charging device for information synchronous interconnection, which obtain customized traceability data for a user by combining monitoring data and remind the user effectively.
In a first aspect of the embodiment of the present invention, a secure charging method for information synchronization interconnection is provided, including:
the method comprises the steps that after a server receives a charging request sent by a request end, a first acquisition instruction is sent to a camera end corresponding to a charging pile, the camera end responds to the first acquisition instruction to perform video acquisition, and the moment of receiving the first acquisition instruction is taken as an initial acquisition moment;
the server synchronously sends a second acquisition instruction to the camera end after receiving a charging start signal sent by the charging pile, and the camera end responds to the second acquisition instruction by performing video acquisition and identification and takes the moment of receiving the second acquisition instruction as the identification acquisition moment;
the camera end determines an identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, and if it is determined that the identification monitoring area has a corresponding identification target, sends the identification target to the server and takes the corresponding time as the reminding acquisition time;
the server performs safety analysis in combination with the charging pile information after receiving the identification target, and if it is determined that the safety reminding condition is met, starts to synchronize the video of the identification monitoring area to the request end based on the reminding acquisition time and controls the reminding device to respond;
and if the server receives a video calling instruction of the user, acquiring the video acquired by the camera end, and synchronizing the video to the request end after processing the video based on the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time.
Optionally, in one possible implementation manner of the first aspect, the camera end determining the identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, and, if it is determined that the identification monitoring area has a corresponding identification target, sending the identification target to the server and taking the corresponding time as the reminding acquisition time, includes:
the camera end comprises at least one camera and an edge computing gateway connected with the camera;
The edge computing gateway determines a corresponding preset camera according to the position of the charging pile, obtains a target pixel point which is a first preset pixel value in the video of the camera, obtains a plurality of recognition contour ranges, and sequences the plurality of preset recognition contour ranges to obtain corresponding contour numbers;
and determining preset numbers corresponding to the charging piles and the cameras, and selecting an identification contour range of the contour number corresponding to the preset number to determine an identification monitoring area.
Optionally, in one possible implementation manner of the first aspect, the determining, by the edge computing gateway, a corresponding preset camera according to a position of the charging pile, obtaining a target pixel point that is a first preset pixel value in a video of the camera, to obtain a plurality of recognition contour ranges, and sorting the plurality of preset recognition contour ranges to obtain corresponding contour numbers includes:
the edge computing gateway determines a preset camera corresponding to the charging piles according to a preset corresponding table, wherein the preset corresponding table is provided with a corresponding relation between each charging pile and the camera and a preset number corresponding to the charging pile and the camera;
identifying target pixel points which are first preset pixel values in the video of the camera, and classifying all target pixel points which are directly adjacent or indirectly adjacent through other target pixel points into a first pixel point set;
Reserving first pixel point sets with the number larger than or equal to the number of the preset pixel points, sequencing the reserved first pixel point sets to obtain a first sequence, and adding corresponding contour numbers to preset identification contours formed by each first pixel point set according to the sequence of the first sequence.
Optionally, in one possible implementation manner of the first aspect, the reserving a first pixel set greater than or equal to a preset number of pixels and sorting the reserved first pixel set to obtain a first sequence, adding a corresponding contour number to a preset identification contour formed by each first pixel set according to an order of the first sequence, includes:
determining an image ordering direction corresponding to the video, and determining a corresponding coordinate ordering direction according to the image ordering direction;
and determining the center coordinates of the corresponding preset identification contour according to the coordinates of each target pixel point in the first pixel point set, determining the ordering coordinate value corresponding to the center coordinates based on the coordinate ordering direction, and ordering the first pixel point set corresponding to the ordering coordinate value according to the coordinate ordering direction to obtain a first sequence.
Optionally, in one possible implementation manner of the first aspect, the determining an image ordering direction corresponding to the video, and determining a corresponding coordinate ordering direction according to the image ordering direction, includes:
carrying out coordinatization on the images in the video to obtain a corresponding coordinate system, wherein the coordinate system comprises coordinate axes and the coordinate value corresponding to each pixel point;
and if the image ordering direction corresponding to the video is judged to correspond to the coordinate axis of the abscissa, taking the abscissa axis as the coordinate ordering direction, and if the image ordering direction corresponding to the video is judged to correspond to the coordinate axis of the ordinate, taking the ordinate axis as the coordinate ordering direction.
Optionally, in one possible implementation manner of the first aspect, the determining, according to the coordinates of each target pixel point in the first set of pixel points, a center coordinate of a corresponding preset recognition contour, determining, based on the coordinate sorting direction, a sorting coordinate value corresponding to the center coordinate, and sorting, according to the coordinate sorting direction, the first set of pixel points corresponding to the sorting coordinate value to obtain the first sequence includes:
if the coordinate ordering direction is the abscissa axis, acquiring the abscissa of each target pixel point in the first pixel point set, determining an abscissa maximum value and an abscissa minimum value, and ordering coordinate values of the central coordinate according to the abscissa maximum value and the abscissa minimum value;
if the coordinate ordering direction is the abscissa axis forward direction, ordering the first pixel point set from small to large according to the ordering coordinate value to obtain a first sequence;
And if the coordinate ordering direction is the negative direction of the abscissa axis, ordering the first pixel point set according to the ordering coordinate value from big to small to obtain a first sequence.
Optionally, in one possible implementation manner of the first aspect, the determining, according to the coordinates of each target pixel point in the first set of pixel points, a center coordinate of a corresponding preset recognition contour, determining, based on the coordinate sorting direction, a sorting coordinate value corresponding to the center coordinate, and sorting, according to the coordinate sorting direction, the first set of pixel points corresponding to the sorting coordinate value to obtain the first sequence includes:
if the coordinate ordering direction is the ordinate axis, acquiring the ordinate of each target pixel point in the first pixel point set, determining an ordinate maximum value and an ordinate minimum value, and ordering coordinate values of the central coordinate according to the ordinate maximum value and the ordinate minimum value;
if the coordinate ordering direction is the positive direction of the ordinate axis, ordering the first pixel point set from small to large according to the ordering coordinate value to obtain a first sequence;
and if the coordinate ordering direction is the opposite direction of the ordinate axis, ordering the first pixel point set according to the ordering coordinate value from big to small to obtain a first sequence.
Optionally, in one possible implementation manner of the first aspect, the sending, if it is determined that the identification monitoring area has a corresponding identification target, the identification target to the server, and taking the corresponding time as the alert collection time, includes:
acquiring an image in a recognition monitoring area, and determining a recognition target based on OPENCV recognition, wherein the recognition target at least comprises a person;
if a person exists in the identification monitoring area, or the distance between the outline of the person and the outline of the vehicle is judged to be smaller than the preset distance, the corresponding identification target is sent to the server, and the server takes the moment of receiving the identification target as the reminding acquisition moment.
Optionally, in one possible implementation manner of the first aspect, the server performing safety analysis in combination with the charging pile information after receiving the identification target and, if it is determined that the safety reminding condition is met, starting to synchronize the video of the identification monitoring area to the request end based on the reminding acquisition time, includes:
the server acquires vibration information of the charging gun in the charging pile information after receiving the identification target, and judges that a safety reminding condition is met if the vibration information reaches a preset vibration value;
and intercepting the video image in the identification monitoring area after the reminding acquisition time and synchronizing it to the request end.
Optionally, in one possible implementation manner of the first aspect, the server acquires the video acquired by the camera if receiving the video call instruction of the user, and synchronizes to the request end after processing the video based on the start acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time, including:
if a video calling instruction of a user is received, a server acquires a video acquired by a camera end to generate a corresponding video time axis, and determines the moment corresponding to each image frame in the video time axis;
determining corresponding image frames as separation frames according to the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time, and displaying the separation frames in a node form in a video time axis;
adding, to the video time axis, labels corresponding to the start acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time respectively, and carrying out desensitization processing, in a manner corresponding to the label, on the video segments corresponding to different sections of the time axis;
and synchronizing the desensitized video and the corresponding video time axis to the request end.
Optionally, in a possible implementation manner of the first aspect, the desensitizing processing of the video corresponding to different time axes according to different labels in a corresponding manner includes:
if the image frames of a video segment in the time axis are judged to correspond to the identification acquisition moment, acquiring the human body contours corresponding to those image frames, and acquiring the contour edge pixel points corresponding to the human body contours;
determining coordinates of all pixel points to be desensitized in the human body contour according to the contour edge pixel points to obtain a first coordinate set to be desensitized;
and carrying out binarization processing on all pixel points in the first coordinate set to be desensitized according to a preset binarization scheme to obtain desensitized image frames and corresponding video segments.
Optionally, in a possible implementation manner of the first aspect, the desensitizing processing of the video corresponding to different time axes according to different labels in a corresponding manner includes:
if the image frames of a video segment in the time axis are judged to correspond to the reminding acquisition moment, acquiring the human body contour corresponding to those image frames, and acquiring the contour edge pixel points corresponding to the human body contour;
identifying the neck of the human body contour to obtain a neck separation line, separating contour edge pixel points based on the neck separation line, and determining the area of the neck separation line facing the head corresponding to the contour edge pixel points to obtain a second coordinate set to be desensitized;
And carrying out binarization processing on all pixel points in the second coordinate set to be desensitized according to a preset binarization scheme to obtain a desensitized image frame and a corresponding video segment.
In a second aspect of the embodiment of the present invention, there is provided a secure charging apparatus for information synchronization interconnection, including:
the acquisition module is used for enabling the server to send a first acquisition instruction to the corresponding camera end at the charging pile after receiving the charging request sent by the request end, and the camera end responds to the first acquisition instruction to carry out video acquisition and takes the moment of receiving the first acquisition instruction as the initial acquisition moment;
the identification module is used for enabling the server to synchronously send a second acquisition instruction to the camera end after receiving a charging start signal sent by the charging pile, and the camera end responds to the second acquisition instruction to carry out video acquisition and identification and takes the moment of receiving the second acquisition instruction as an identification acquisition moment;
the determining module is used for enabling the camera end to determine an identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, and, if it is determined that the identification monitoring area has a corresponding identification target, to send the identification target to the server and take the corresponding time as the reminding acquisition time;
the analysis module is used for enabling the server to perform safety analysis in combination with the charging pile information after receiving the identification target and, if it is determined that the safety reminding condition is met, to start synchronizing the video of the identification monitoring area to the request end based on the reminding acquisition time;
the synchronization module is used for enabling the server to acquire the video acquired by the camera end if receiving the video calling instruction of the user, and synchronizing the video to the request end after processing the video based on the starting acquisition time, the identification acquisition time, the ending acquisition time and/or the reminding acquisition time.
In a third aspect of an embodiment of the present invention, there is provided an electronic device including: a memory, a processor and a computer program stored in the memory, the processor running the computer program to perform the first aspect of the invention and the methods that the first aspect may relate to.
In a fourth aspect of embodiments of the present invention, there is provided a storage medium having stored therein a computer program for implementing the method of the first aspect and the various possible aspects of the first aspect when executed by a processor.
Advantageous effects
1. In this scheme, the camera end arranged at the charging pile performs data acquisition, and the data synchronized to the corresponding request end are obtained so as to monitor the charging process of the request end. During acquisition, the scheme determines a plurality of data moments by combining the different instructions associated with the request end, including the initial acquisition moment at which acquisition starts, the identification acquisition moment corresponding to the second acquisition instruction, the reminding acquisition moment at which a target is identified, and the corresponding termination acquisition moment, so as to obtain the corresponding data. Meanwhile, the scheme performs safety analysis on the charging pile information; when the safety reminding condition is met, the video of the identification monitoring area is synchronized to the request end based on the reminding acquisition moment and the reminding device is controlled to respond. In this way, on the basis of the charging pile, the scheme obtains customized traceability data for the user by combining the monitoring data, reminds the user effectively, and improves charging stability.
2. The scheme considers that one camera end corresponds to a plurality of charging areas, and in order to obtain the data of the corresponding request end, the areas must be identified and matched. First, the scheme obtains a plurality of identification contour ranges by combining the target pixel points, and then selects, in combination with the preset number, the identification contour range whose contour number corresponds to that preset number as the identification monitoring area. When numbering, the image ordering direction corresponding to the video is determined, the corresponding coordinate ordering direction is determined in combination with the image ordering direction, and the first pixel point sets corresponding to the ordering coordinate values are ordered according to the coordinate ordering direction to obtain the first sequence. The scheme determines the first sequence in different ways for different coordinate ordering directions, thereby realizing the correspondence of the numbers.
3. The scheme carries out desensitization processing, in a manner corresponding to the label, on the video segments corresponding to different sections of the time axis. During desensitization, if the image frames of a video segment in the time axis are judged to correspond to the identification acquisition moment, the scheme determines, in combination with the contour edge pixel points, the coordinates of all pixel points to be desensitized within the human body contour to obtain a first coordinate set to be desensitized, and carries out binarization processing on all pixel points in the first coordinate set to be desensitized according to a preset binarization scheme to obtain the desensitized image frames and the corresponding video segment, that is, full desensitization. If the image frames of a video segment in the time axis are judged to correspond to the reminding acquisition moment, the corresponding contour edge pixel points are obtained, the neck of the human body contour is identified to obtain a neck separation line, the contour edge pixel points are separated by the neck separation line, the region of the contour edge pixel points on the head side of the neck separation line is determined to obtain a second coordinate set to be desensitized, and binarization processing is carried out on all pixel points in the second coordinate set to be desensitized according to the preset binarization scheme to obtain the desensitized image frames and the corresponding video segment, that is, partial desensitization of the user data.
Drawings
Fig. 1 is a schematic flow chart of a secure charging method for information synchronization interconnection provided by an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a secure charging device with information synchronization interconnection according to an embodiment of the present invention.
Description of the embodiments
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
It should be understood that in the present invention, "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present invention, "plurality" means two or more. "and/or" merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. "comprising A, B and C" or "comprising A, B, C" means that all of A, B and C are comprised; "comprising A, B or C" means that one of A, B and C is comprised; and "comprising A, B and/or C" means that any one, any two, or all three of A, B and C are comprised.
It should be understood that in the present invention, "B corresponding to a", "a corresponding to B", or "B corresponding to a" means that B is associated with a, from which B can be determined. Determining B from a does not mean determining B from a alone, but may also determine B from a and/or other information. The matching of A and B is that the similarity of A and B is larger than or equal to a preset threshold value.
As used herein, "if" may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
First, the application scenario of the scheme is explained. The scenario comprises a charging pile and a request end that interacts with the charging pile; the request end may be a mobile phone terminal held by the user who is charging. A camera end is arranged at the charging piles, and one camera end corresponds to a plurality of charging piles, that is, one camera monitors a plurality of charging areas at the same time. A server is used for data processing, and the specific processing is explained below.
Referring to fig. 1, a flow chart of a secure charging method for information synchronization interconnection provided by an embodiment of the present invention includes S1-S5:
S1, after receiving a charging request sent by a request end, a server sends a first acquisition instruction to a corresponding camera end at a charging pile, and the camera end responds to the first acquisition instruction to perform video acquisition and takes the moment of receiving the first acquisition instruction as the initial acquisition moment.
When a user needs to charge, a charging request can be sent to a server through a request end, and the server can send a first acquisition instruction to a corresponding camera end at a charging pile after receiving the charging request sent by the request end.
After receiving the first acquisition instruction, the camera end responds to it by acquiring video and takes the moment of receiving the first acquisition instruction as the initial acquisition moment. It can be understood that data acquisition at the camera end starts at this point.
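For illustration only, the following is a minimal sketch of how a camera end might start recording when the first acquisition instruction arrives and keep the receipt time as the initial acquisition moment; it is written in Python with OpenCV, and names such as CAMERA_INDEX and OUTPUT_PATH are assumptions rather than part of the described scheme.

```python
# Minimal sketch (assumed setup): start capturing on the first acquisition
# instruction and remember the receipt time as the initial acquisition moment.
import time
import cv2

CAMERA_INDEX = 0            # assumed local camera index
OUTPUT_PATH = "charge.avi"  # assumed output file

def start_acquisition():
    initial_acquisition_time = time.time()   # moment the instruction is received
    cap = cv2.VideoCapture(CAMERA_INDEX)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if the camera reports 0
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(OUTPUT_PATH,
                             cv2.VideoWriter_fourcc(*"XVID"),
                             fps, (width, height))
    # frames would then be read from cap and written with writer.write(frame)
    return cap, writer, initial_acquisition_time
```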
S2, after receiving a charging start signal sent by the charging pile, the server synchronizes to the camera end to send a second acquisition instruction, and the camera end responds to the second acquisition instruction to perform video acquisition and identification and takes the moment of receiving the second acquisition instruction as an identification acquisition moment.
It is worth mentioning that no charging operation has taken place yet at the initial acquisition moment; when the user inserts the charging gun into the electric vehicle, the charging pile generates a charging start signal.
After receiving a charging start signal sent by the charging pile, the server synchronizes to the camera end to send a second acquisition instruction.
At the moment, the camera end responds to the second acquisition instruction to carry out video acquisition and identification, and the moment of receiving the second acquisition instruction is taken as the identification acquisition moment.
It is understood that the time of recognition acquisition refers to the time when real-time recognition analysis is required for the acquired data, which is different from the time of initial acquisition.
And S3, the camera end determines an identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile; if it is determined that the identification monitoring area has a corresponding identification target, the identification target is sent to the server, and the corresponding time is taken as the reminding acquisition time.
Because one camera end corresponds to a plurality of monitoring areas, the scheme first needs to determine the area. The camera end can determine the identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile.
If the identification monitoring area is judged to have the corresponding identification target, the identification target is sent to the server, and the corresponding time is used as the reminding acquisition time.
In the above embodiment, if it is determined that the identification monitoring area has the corresponding identification target, the identification target is sent to the server, and the corresponding time is used as the reminding acquisition time, including:
the image in the identification monitoring area is acquired, and an identification target is determined based on OPENCV recognition, wherein the identification target at least comprises a person. It will be appreciated that this target recognition may use existing techniques and is not described in detail. OPENCV is a target recognition technology commonly used in the prior art; the target can be determined by recognizing its outline, which is likewise not described in detail here.
If a person exists in the identification monitoring area, or the distance between the outline of the person and the outline of the vehicle is judged to be smaller than the preset distance, the corresponding identification target is sent to the server, and the server takes the moment of receiving the identification target as the reminding acquisition moment.
In some cases, a person being present in the identification monitoring area, or the outline of the person being closer to the outline of the vehicle than the preset distance, indicates that an abnormal situation may exist; at this time, the corresponding identification target is sent to the server, and the server takes the time of receiving the identification target as the reminding acquisition time.
It is worth mentioning that the reminding acquisition time is the time that marks the abnormal situation and is used to trigger the reminder.
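As an illustrative sketch only (the patent does not fix a particular recognizer), this check could be approximated with OpenCV's stock HOG people detector plus a box-to-box distance test; the names monitor_polygon, vehicle_box and MIN_DISTANCE_PX are assumptions introduced here for illustration.

```python
# Illustrative sketch (assumed recognizer and thresholds): detect persons and
# report when one is inside the monitoring area or near the vehicle outline.
import cv2
import numpy as np

MIN_DISTANCE_PX = 50  # assumed "preset distance", in pixels

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def person_alert(frame, monitor_polygon, vehicle_box):
    """monitor_polygon: Nx1x2 int32 contour of the identification monitoring area.
    vehicle_box: (x, y, w, h) bounding box of the vehicle outline."""
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    vx, vy, vw, vh = vehicle_box
    for (px, py, pw, ph) in rects:
        center = (float(px + pw / 2.0), float(py + ph / 2.0))
        inside = cv2.pointPolygonTest(monitor_polygon, center, False) >= 0
        # gap between the person box and the vehicle box (0 when they overlap)
        dx = max(vx - (px + pw), px - (vx + vw), 0)
        dy = max(vy - (py + ph), py - (vy + vh), 0)
        if inside or np.hypot(dx, dy) < MIN_DISTANCE_PX:
            return True  # the identification target would be reported to the server
    return False
```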
In some embodiments, S3 (the camera end determines the identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, if it is determined that the identification monitoring area has a corresponding identification target, the identification target is sent to the server, and the corresponding time is used as the reminding acquisition time) includes S31-S33:
s31, the camera end comprises at least one camera and an edge computing gateway connected with the camera.
The camera end of the scheme comprises at least one camera and an edge computing gateway connected with the camera, and the edge computing gateway can conduct edge analysis on data to obtain an analysis result.
S32, the edge computing gateway determines a corresponding preset camera according to the position of the charging pile, obtains a target pixel point which is a first preset pixel value in the video of the camera, obtains a plurality of recognition contour ranges, and sorts the plurality of preset recognition contour ranges to obtain corresponding contour numbers.
The edge computing gateway can determine the corresponding preset camera in combination with the position of the charging pile, and then identify, in the video acquired by that camera, the target pixel points having the first preset pixel value so as to obtain a plurality of recognition contour ranges. It can be understood that each charging area generally has a marking line, for example a white square marking line, and in this case the target pixel points of the first preset pixel value may be the pixel points corresponding to white.
After obtaining a plurality of identification contour ranges, the scheme can sort the plurality of preset identification contour ranges to obtain corresponding contour numbers, and the specific mode is described below.
The edge computing gateway determines a corresponding preset camera according to the position of the charging pile, obtains a target pixel point which is a first preset pixel value in the video of the camera, obtains a plurality of recognition contour ranges, and sequences the plurality of preset recognition contour ranges to obtain corresponding contour numbers, wherein the step of sequencing the plurality of preset recognition contour ranges to obtain the corresponding contour numbers comprises S321-S323:
s321, the edge computing gateway determines a preset camera corresponding to the charging piles according to a preset corresponding table, wherein the preset corresponding table is provided with a corresponding relation between each charging pile and the camera and a preset number corresponding to the charging pile and the camera.
According to the scheme, the edge computing gateway can determine the preset camera corresponding to the charging pile according to the preset correspondence table.
The preset corresponding table is provided with a corresponding relation between each charging pile and the camera and preset numbers corresponding to the charging piles and the camera.
S322, identifying target pixel points which are first preset pixel values in the video of the camera, and classifying all target pixel points which are directly adjacent or indirectly adjacent through other target pixel points into a first pixel point set.
The method identifies the target pixel points having the first preset pixel value in the video of the camera, and classifies all target pixel points that are directly adjacent, or indirectly adjacent through other target pixel points, into one first pixel point set. It can be understood that the pixels of one parking-space marking line are usually adjacent to each other, so the scheme obtains a plurality of first pixel point sets in this way.
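A minimal sketch of this grouping step is given below, assuming the first preset pixel value is a near-white grey level and treating connected-component labelling as the notion of direct or indirect adjacency; the threshold FIRST_PRESET_VALUE is an assumption, not a value taken from the patent.

```python
# Minimal sketch (assumed threshold): collect "target pixels" matching the
# first preset pixel value and group adjacent ones into first pixel point sets.
import cv2
import numpy as np

FIRST_PRESET_VALUE = 200  # assumed grey level for the white marking line

def first_pixel_sets(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # target pixels: at or above the preset pixel value
    target = (gray >= FIRST_PRESET_VALUE).astype(np.uint8)
    # 8-connectivity so diagonally adjacent target pixels land in the same set
    num_labels, labels = cv2.connectedComponents(target, connectivity=8)
    sets = []
    for label in range(1, num_labels):          # label 0 is the background
        ys, xs = np.where(labels == label)
        sets.append(np.column_stack((xs, ys)))  # one "first pixel point set"
    return sets
```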
S323, reserving first pixel point sets with the number larger than or equal to that of the preset pixel points, sequencing the reserved first pixel point sets to obtain a first sequence, and adding corresponding contour numbers to preset identification contours formed by each first pixel point set according to the sequence of the first sequence.
In order to remove interference, the method sets a preset number of pixel points and reserves only the first pixel point sets whose size is larger than or equal to this preset number, thereby filtering out noise.
Meanwhile, the first sequence is obtained by sequencing the reserved first pixel point sets, and then corresponding contour numbers are added to preset identification contours formed by each first pixel point set according to the sequence of the first sequence.
In some embodiments, reserving a first pixel set greater than or equal to the number of preset pixels and sorting the reserved first pixel set to obtain a first sequence, adding a corresponding profile number to a preset identification profile formed by each first pixel set according to the sequence of the first sequence, including S3231-S3232:
S3231, determining an image ordering direction corresponding to the video, and determining a corresponding coordinate ordering direction according to the image ordering direction.
Because the position of the camera is fixed, the scheme firstly determines the image ordering direction corresponding to the video, and determines the corresponding coordinate ordering direction by combining the image ordering direction.
The determining the image sorting direction corresponding to the video, and determining the corresponding coordinate sorting direction according to the image sorting direction includes:
carrying out coordinatization on the images in the video to obtain a corresponding coordinate system, wherein the coordinate system comprises coordinate axes and the coordinate value corresponding to each pixel point.
It can be understood that the scheme converts the images in the video into a coordinate system, where the coordinate system comprises coordinate axes and the coordinate value corresponding to each pixel point.
And if the image ordering direction corresponding to the video is judged to correspond to the coordinate axis of the abscissa, taking the abscissa axis as the coordinate ordering direction, and if the image ordering direction corresponding to the video is judged to correspond to the coordinate axis of the ordinate, taking the ordinate axis as the coordinate ordering direction.
If it is determined that the image ordering direction corresponding to the video corresponds to the coordinate axis of the abscissa, for example, the image ordering direction is a transverse ordering, the method takes the abscissa axis as the coordinate ordering direction.
If it is determined that the image ordering direction corresponding to the video corresponds to the coordinate axis of the ordinate, for example, the image ordering direction is the longitudinal ordering, the solution takes the ordinate axis as the coordinate ordering direction.
S3232, according to the coordinates of each target pixel point in the first pixel point set, determining the center coordinates of the corresponding preset recognition outline, determining the sorting coordinate values corresponding to the center coordinates based on the coordinate sorting direction, and sorting the first pixel point set corresponding to the sorting coordinate values according to the coordinate sorting direction to obtain a first sequence.
The scheme can combine the coordinates of each target pixel point in the first pixel point set to determine the center coordinates of the corresponding preset identification outline.
After the center coordinates are obtained, the scheme determines the ordering coordinate value corresponding to each center coordinate in combination with the coordinate ordering direction, and then orders the first pixel point sets corresponding to these ordering coordinate values according to the coordinate ordering direction to obtain the first sequence. That is, the scheme sorts the recognition contours based on their center coordinates.
In some embodiments, determining a center coordinate of a corresponding preset recognition contour according to a coordinate of each target pixel point in the first pixel point set, determining a sorting coordinate value corresponding to the center coordinate based on the coordinate sorting direction, and sorting the first pixel point set corresponding to the sorting coordinate value according to the coordinate sorting direction to obtain a first sequence, where the first sequence includes:
If the coordinate sorting direction is the abscissa axis, acquiring the abscissa of each target pixel point in the first pixel point set, determining the abscissa maximum value and the abscissa minimum value, and sorting the coordinate values of the central coordinate according to the abscissa maximum value and the abscissa minimum value.
If the coordinate sorting direction is the abscissa axis, the method can acquire the abscissa of each target pixel point in the first pixel point set, determine the abscissa maximum value and the abscissa minimum value, and calculate the sorting coordinate value of the center coordinate according to the abscissa maximum value and the abscissa minimum value, namely calculate the abscissa value of the center coordinate in the abscissa axis direction as the corresponding sorting coordinate value.
And if the coordinate ordering direction is the forward direction of the abscissa axis, ordering the first pixel point set from small to large according to the ordering coordinate value to obtain a first sequence.
If the coordinate sorting direction is the forward direction of the abscissa axis, the first sequence is obtained by sorting the first pixel point set from small to large according to the sorting coordinate value, for example, the first sequence is obtained by sorting from left to right.
And if the coordinate ordering direction is the negative direction of the abscissa axis, ordering the first pixel point set according to the ordering coordinate value from big to small to obtain a first sequence.
If the coordinate sorting direction is the negative direction of the abscissa axis, the first pixel point set is sorted according to the sorting coordinate value from large to small to obtain a first sequence, for example, the first sequence is sorted from right to left.
In other embodiments, determining a center coordinate of a corresponding preset recognition contour according to a coordinate of each target pixel point in the first pixel point set, determining a sorting coordinate value corresponding to the center coordinate based on the coordinate sorting direction, and sorting the first pixel point set corresponding to the sorting coordinate value according to the coordinate sorting direction to obtain a first sequence, where the first sequence includes:
if the coordinate ordering direction is the ordinate axis, acquiring the ordinate of each target pixel point in the first pixel point set, determining an ordinate maximum value and an ordinate minimum value, and ordering coordinate values of the central coordinate according to the ordinate maximum value and the ordinate minimum value.
If the coordinate sorting direction is the ordinate axis, the method can acquire the ordinate of each target pixel point in the first pixel point set, determine the ordinate maximum value and the ordinate minimum value, and calculate the sorting coordinate value of the central coordinate according to the ordinate maximum value and the ordinate minimum value, namely calculate the ordinate value of the central coordinate in the ordinate axis direction as the corresponding sorting coordinate value.
And if the coordinate ordering direction is the positive direction of the ordinate axis, ordering the first pixel point set from small to large according to the ordering coordinate value to obtain a first sequence.
If the coordinate sorting direction is the forward direction of the ordinate axis, the first sequence is obtained by sorting the first pixel point set from small to large according to the sorting coordinate value, for example, the first sequence is obtained by sorting from bottom to top.
And if the coordinate ordering direction is the opposite direction of the ordinate axis, ordering the first pixel point set according to the ordering coordinate value from big to small to obtain a first sequence.
If the coordinate sorting direction is the negative direction of the ordinate axis, the first pixel point set is sorted according to the sorting coordinate value from large to small to obtain a first sequence, for example, the first sequence is sorted from top to bottom.
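The filtering, sorting and numbering described above could be sketched as follows, under the assumptions that the ordering coordinate value is the midpoint of the minimum and maximum coordinates along the ordering axis and that sets smaller than MIN_POINTS are discarded; both constants and the function name are illustrative, not taken from the patent.

```python
# Minimal sketch (assumed details): keep large pixel sets, compute an ordering
# coordinate per set, sort along the chosen axis/direction, assign contour numbers.
import numpy as np

MIN_POINTS = 500  # assumed "preset number of pixel points" used to reject noise

def number_contours(pixel_sets, axis="x", positive=True):
    """axis: 'x' for the abscissa axis, 'y' for the ordinate axis.
    positive=False corresponds to the negative ordering direction."""
    kept = [s for s in pixel_sets if len(s) >= MIN_POINTS]
    col = 0 if axis == "x" else 1
    # ordering coordinate value: midpoint of the min and max along the axis
    keys = [(s[:, col].min() + s[:, col].max()) / 2.0 for s in kept]
    order = np.argsort(keys)
    if not positive:
        order = order[::-1]
    first_sequence = [kept[i] for i in order]
    # contour numbers follow the order of the first sequence
    return {number: contour for number, contour in enumerate(first_sequence, start=1)}
```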
S33, determining preset numbers corresponding to the charging piles and the cameras, and selecting the identification contour range of the contour number corresponding to the preset number to be determined as an identification monitoring area.
The scheme can determine the preset numbers corresponding to the corresponding charging piles and the cameras, and then selects the identification contour range of the contour number corresponding to the preset number to be determined as the identification monitoring area.
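A minimal sketch, with an assumed table layout, of how the preset correspondence table could map a charging pile to its camera and preset number, and how the contour with the matching number is then selected as the identification monitoring area; the pile and camera identifiers are illustrative only.

```python
# Minimal sketch (assumed data): look up the preset number for a charging pile
# and pick the contour whose contour number matches it.
PRESET_TABLE = {
    "pile_01": {"camera": "cam_A", "number": 1},
    "pile_02": {"camera": "cam_A", "number": 2},
    "pile_03": {"camera": "cam_B", "number": 1},
}

def select_monitoring_area(pile_id, numbered_contours):
    """numbered_contours: contour number -> pixel set, e.g. from number_contours()."""
    preset_number = PRESET_TABLE[pile_id]["number"]
    return numbered_contours[preset_number]
```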
S4, the server performs safety analysis in combination with the charging pile information after receiving the identification target, and if it is determined that the safety reminding condition is met, starts to synchronize the video of the identification monitoring area to the request end based on the reminding acquisition time and controls the reminding device to respond;
after receiving the identification target, the server can perform safety analysis in combination with the charging pile information.
If it is determined that the safety reminding condition is met, a reminder is required; the scheme then starts to synchronize the video of the identification monitoring area to the request end based on the reminding acquisition time and controls the reminding device to respond;
the server performing safety analysis in combination with the charging pile information after receiving the identification target and, if it is determined that the safety reminding condition is met, starting to synchronize the video of the identification monitoring area to the request end based on the reminding acquisition time and controlling the reminding device to respond, includes S41-S42:
S41, the server acquires vibration information of the charging gun in the charging pile information after receiving the identification target, and if the vibration information reaches a preset vibration value, the server judges that a safety reminding condition is reached.
When other users touch the charging gun, the charging pile can detect vibration information, the server can monitor the vibration information of the charging gun in the charging pile information after receiving the identification target, if the vibration information reaches a preset vibration value, the abnormal condition is indicated, and at the moment, the safety reminding condition can be judged to be met. The vibration value can be vibration amplitude, and when other users touch the charging gun forcefully, the vibration amplitude can be increased.
S42, capturing the video image in the identification monitoring area after the reminding acquisition time and synchronizing the video image with the request end.
After the reminding acquisition time, the video image in the identification monitoring area is intercepted to obtain the relevant data, and the data are synchronized to the request end so as to remind the user in time.
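As an assumption about how such a check might look in practice, the following sketch compares the charging-gun vibration amplitude against a preset value and keeps only the frames captured after the reminding acquisition time; the field name gun_vibration and the threshold are illustrative.

```python
# Minimal sketch (assumed field names and threshold): server-side safety check
# and selection of frames from the reminding acquisition time onward.
PRESET_VIBRATION = 2.5  # assumed amplitude threshold

def safety_alert_needed(pile_info):
    """pile_info is assumed to carry a 'gun_vibration' amplitude reading."""
    return pile_info.get("gun_vibration", 0.0) >= PRESET_VIBRATION

def frames_after(frames_with_time, remind_time):
    """Keep only frames captured at or after the reminding acquisition time."""
    return [frame for (t, frame) in frames_with_time if t >= remind_time]
```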
And S5, if the server receives a video calling instruction of the user, acquiring the video acquired by the camera end, and synchronizing the video to the request end after processing the video based on the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time.
If the server receives a video calling instruction from the user, that is, the user requests to view the video, the scheme acquires the video collected by the camera end, processes it in combination with the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time, and then synchronizes it to the request end.
In some embodiments, if the server receives a video call instruction from the user, the server acquires a video acquired by the camera, processes the video based on the start acquisition time, the identification acquisition time, the end acquisition time and/or the reminding acquisition time, and synchronizes to the request end, including S51-S54:
S51, if a video calling instruction of a user is received, the server acquires videos acquired by the camera end to generate a corresponding video time axis, and determines the moment corresponding to each image frame in the video time axis.
If a server receives a video calling instruction of a user, the scheme can acquire videos acquired by a camera end to generate a corresponding video time axis, and then the moment corresponding to each image frame in the video time axis is determined.
S52, determining corresponding image frames as separation frames according to the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time, and displaying the separation frames in a node form in a video time axis.
Taking the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time as references, the scheme finds the corresponding image frames to serve as separation frames, and these separation frames are then displayed as nodes on the video time axis.
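A minimal Python sketch of S52 follows, locating the frame nearest to each key acquisition time so it can be marked as a separation-frame node; the bisect-based lookup is an illustrative choice, not the disclosed implementation:

```python
from bisect import bisect_left


def separation_frames(time_axis: list[float], key_times: list[float]) -> list[int]:
    """For each key acquisition time, return the index of the nearest frame on the time axis."""
    nodes = []
    for t in key_times:
        i = bisect_left(time_axis, t)
        if i == 0:
            nodes.append(0)
        elif i >= len(time_axis):
            nodes.append(len(time_axis) - 1)
        else:
            # Pick whichever neighbouring frame is closer in time.
            nodes.append(i if time_axis[i] - t < t - time_axis[i - 1] else i - 1)
    return nodes
```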
S53, adding, on the video time axis, labels corresponding to the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time respectively, and performing desensitization processing in the corresponding mode on the video segments corresponding to different sections of the time axis according to their labels.
Labels corresponding to the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time are added to the video time axis respectively, and the video segments corresponding to different sections of the time axis are then desensitized in the mode that matches their labels. It should be noted that the requirements differ at different moments, so the scheme applies a different desensitization mode to the segments under different labels.
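A minimal Python sketch of the label-driven dispatch in S53 follows; the label names and the two placeholder desensitization routines (full-body for the identification label, head-only for the reminding label, detailed in S531-S536 below) are assumptions made for illustration:

```python
from typing import Callable

import numpy as np

# Assumed label names; the disclosure only requires that each key time carries its own label.
DESENSITIZERS: dict[str, Callable[[np.ndarray], np.ndarray]] = {
    "identification": lambda frame: frame,  # placeholder for full-body desensitization (S531-S533)
    "reminding": lambda frame: frame,       # placeholder for head-only desensitization (S534-S536)
}


def desensitize_segment(frames: list[np.ndarray], label: str) -> list[np.ndarray]:
    """Apply the desensitization mode that matches the segment's label."""
    handler = DESENSITIZERS.get(label, lambda frame: frame)  # unknown labels: leave frames unchanged
    return [handler(f) for f in frames]
```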
S54, synchronizing the desensitized video and the corresponding video time axis to the request end.
In the above embodiment, the step of performing desensitization processing in the corresponding mode on the video segments according to their labels includes S531-S533:
S531, if the image frames of a video segment on the time axis are judged to correspond to the identification acquisition time, acquiring the human body contour in those image frames, and obtaining the contour edge pixel points of the human body contour.
If the image frames of a video segment on the time axis are judged to correspond to the identification acquisition time, identification has started at those frames; the scheme then acquires the human body contour in the corresponding image frames and obtains the contour edge pixel points of that contour.
S532, determining, according to the contour edge pixel points, the coordinates of all pixel points to be desensitized inside the human body contour, and obtaining a first coordinate set to be desensitized.
By combining the contour edge pixel points, the scheme determines the coordinates of all pixel points to be desensitized inside the human body contour, yielding the first coordinate set to be desensitized.
S533, binarizing all pixel points in the first coordinate set to be desensitized according to a preset binarization scheme to obtain a desensitized image frame and a corresponding video segment.
After the first coordinate set to be desensitized is obtained, the scheme carries out binarization processing on all pixel points in the first coordinate set to be desensitized according to a preset binarization scheme to obtain the desensitized image frames and the corresponding video segments. The preset binarization scheme may set these pixel points to pure black or pure white, so that the user is completely desensitized.
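A minimal OpenCV sketch of S531-S533 is given below. It assumes a binary person mask for the frame is already available from an upstream segmentation or recognition step; the mask source and the choice of a black fill are assumptions, not part of the disclosure:

```python
import cv2
import numpy as np


def desensitize_full_body(frame: np.ndarray, person_mask: np.ndarray, fill_value: int = 0) -> np.ndarray:
    """Blank out every pixel inside the human body contour (S531-S533).

    person_mask is a single-channel uint8 image in which non-zero pixels mark the person.
    """
    # S531: obtain the contour edge pixel points of the human body contour.
    contours, _ = cv2.findContours(person_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame
    body = max(contours, key=cv2.contourArea)

    # S532: first coordinate set to be desensitized = all pixels inside the contour.
    region = np.zeros(person_mask.shape, dtype=np.uint8)
    cv2.drawContours(region, [body], -1, color=255, thickness=cv2.FILLED)

    # S533: binarize those pixels to a single value (black here) to fully desensitize the user.
    out = frame.copy()
    out[region == 255] = fill_value
    return out
```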
In some embodiments, the step of performing desensitization processing in the corresponding mode on the video segments according to their labels comprises S534-S536:
S534, if the image frames of a video segment on the time axis are judged to correspond to the reminding acquisition time, acquiring the human body contour in those image frames, and obtaining the contour edge pixel points of the human body contour.
If the image frames of a video segment on the time axis correspond to the reminding acquisition time, the degree of abnormality is higher; the scheme then acquires the human body contour in the corresponding image frames and obtains the contour edge pixel points of that contour.
S535, identifying the neck of the human body contour to obtain a neck separation line, separating the contour edge pixel points with the neck separation line, and taking the region of the contour on the head side of the neck separation line to obtain a second coordinate set to be desensitized.
First, the neck of the human body contour is identified to obtain a neck separation line; the contour edge pixel points are then separated by this line, and the region of the contour on the head side of the line is determined to obtain the second coordinate set to be desensitized. In other words, the scheme takes the head area of the user as the desensitization area; a minimal sketch is given after step S536 below.
S536, performing binarization processing on all pixel points in the second coordinate set to be desensitized according to a preset binarization scheme to obtain a desensitized image frame and a corresponding video segment.
After the second coordinate set to be desensitized is obtained, the scheme carries out binarization processing on all pixel points in the second coordinate set to be desensitized according to a preset binarization scheme to obtain desensitized image frames and corresponding video segments.
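A minimal OpenCV sketch of S534-S536 follows, under the same assumptions as the previous sketch; the neck separation line is approximated here as a horizontal line placed a fixed fraction below the top of the body contour, which is an illustrative simplification rather than the neck-recognition step of the disclosure:

```python
import cv2
import numpy as np


def desensitize_head(frame: np.ndarray, person_mask: np.ndarray,
                     neck_ratio: float = 0.15, fill_value: int = 255) -> np.ndarray:
    """Blank out only the head region above an approximated neck separation line (S534-S536)."""
    contours, _ = cv2.findContours(person_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame
    body = max(contours, key=cv2.contourArea)
    _, y, _, h = cv2.boundingRect(body)

    # S535: approximate the neck separation line; the head side is everything above it.
    neck_y = y + int(h * neck_ratio)  # assumed: the neck sits roughly 15% below the top of the contour

    region = np.zeros(person_mask.shape, dtype=np.uint8)
    cv2.drawContours(region, [body], -1, color=255, thickness=cv2.FILLED)
    region[neck_y:, :] = 0  # keep only the head-side part of the contour (second coordinate set)

    # S536: binarize the second coordinate set (white fill here) to desensitize the head.
    out = frame.copy()
    out[region == 255] = fill_value
    return out
```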
Referring to fig. 2, a schematic structural diagram of a safe charging device for information synchronous interconnection according to an embodiment of the present invention is shown; the device includes:
the acquisition module is used for enabling the server to send a first acquisition instruction to the camera end corresponding to the charging pile after receiving the charging request sent by the request end, and the camera end responds to the first acquisition instruction to carry out video acquisition and takes the moment of receiving the first acquisition instruction as the initial acquisition moment;
the identification module is used for enabling the server to synchronously send a second acquisition instruction to the camera end after receiving a charging start signal sent by the charging pile, and the camera end responds to the second acquisition instruction to carry out video acquisition and identification and takes the moment of receiving the second acquisition instruction as an identification acquisition moment;
the determining module is used for enabling the camera end to determine an identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, and, if the identification monitoring area is judged to have a corresponding identification target, to send the identification target to the server and take the corresponding time as a reminding acquisition time;
the analysis module is used for enabling the server to perform safety analysis in combination with the charging pile information after receiving the identification target, and if the safety reminding condition is judged to be met, to start synchronizing the video of the identification monitoring area to the request end based on the reminding acquisition time;
the synchronization module is used for enabling the server, if it receives a video calling instruction from the user, to acquire the video collected by the camera end and to synchronize the video to the request end after processing it based on the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time.
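For illustration, a minimal Python sketch of how the five modules might be composed around the server follows; the class and method names, and the injected collaborator objects, are assumptions made here and do not appear in the disclosure:

```python
class SafeChargingDevice:
    """Illustrative composition of the five modules described above (names assumed)."""

    def __init__(self, server, camera_end, request_end, reminder):
        self.server = server
        self.camera_end = camera_end
        self.request_end = request_end
        self.reminder = reminder

    def acquire(self, charging_request):
        # Acquisition module: first acquisition instruction -> initial acquisition moment.
        return self.camera_end.start_capture(self.server.first_instruction(charging_request))

    def identify(self, charging_start_signal):
        # Identification module: second acquisition instruction -> identification acquisition moment.
        return self.camera_end.start_recognition(self.server.second_instruction(charging_start_signal))

    def determine(self, pile_position):
        # Determining module: identification monitoring area and reminding acquisition moment.
        return self.camera_end.monitor_area(pile_position)

    def analyse(self, identification_target, pile_info):
        # Analysis module: safety analysis; on alert, sync the monitored video and trigger the reminder.
        if self.server.safety_reminding_reached(identification_target, pile_info):
            self.request_end.sync(self.camera_end.clip_from_remind_time())
            self.reminder.respond()

    def synchronize(self, call_instruction):
        # Synchronization module: process the video by the key acquisition times and sync it.
        video = self.camera_end.fetch_video()
        self.request_end.sync(self.server.process_by_key_times(video, call_instruction))
```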
An embodiment of the invention provides an electronic device, which comprises a processor, a memory and a computer program, wherein:
the memory is used for storing the computer program, for example application programs or functional modules implementing the methods described above, and may also be a flash memory;
the processor is used for executing the computer program stored in the memory to realize each step executed by the device in the above method; reference may be made in particular to the description of the method embodiments above.
Alternatively, the memory may be separate from or integrated with the processor.
When the memory is a device separate from the processor, the apparatus may further include:
a bus for connecting the memory and the processor.
The present invention also provides a storage medium having stored therein a computer program for implementing the methods provided by the various embodiments described above when executed by a processor.
The storage medium may be a computer storage medium or a communication medium. Communication media includes any medium that facilitates transfer of a computer program from one place to another. Computer storage media can be any available media that can be accessed by a general purpose or special purpose computer. For example, a storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (ASIC). In addition, the ASIC may reside in a user device. The processor and the storage medium may also reside as discrete components in a communication device. The storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
The present invention also provides a program product comprising execution instructions stored in a storage medium. The at least one processor of the device may read the execution instructions from the storage medium, the execution instructions being executed by the at least one processor to cause the device to implement the methods provided by the various embodiments described above.
In the above embodiments of the terminal or the server, it should be understood that the processor may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in the processor.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the invention.

Claims (12)

1. A safe charging method for information synchronous interconnection, characterized by comprising the following steps:
the method comprises the steps that after a server receives a charging request sent by a request end, a first acquisition instruction is sent to a camera end corresponding to a charging pile, the camera end responds to the first acquisition instruction to perform video acquisition, and the moment of receiving the first acquisition instruction is taken as an initial acquisition moment;
the server synchronously sends a second acquisition instruction to the camera end after receiving a charging start signal sent by the charging pile, the camera end responds to the second acquisition instruction to carry out video acquisition and identification, and the moment of receiving the second acquisition instruction is taken as an identification acquisition moment;
the camera end determines an identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, and if the identification monitoring area is judged to have a corresponding identification target, the identification target is sent to the server and the corresponding time is taken as a reminding acquisition time;
the server performs safety analysis in combination with the charging pile information after receiving the identification target, and if the safety reminding condition is judged to be met, starts to synchronize the video of the identification monitoring area to the request end based on the reminding acquisition time and controls a reminding device to respond;
if the server receives a video calling instruction from a user, acquiring the video collected by the camera end, and synchronizing the video to the request end after processing it based on the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time;
wherein the step in which the camera end determines an identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, sends the identification target to the server if the identification monitoring area is judged to have a corresponding identification target, and takes the corresponding time as the reminding acquisition time comprises:
the camera end comprises at least one camera and an edge computing gateway connected with the camera;
the edge computing gateway determines a corresponding preset camera according to the position of the charging pile, obtains target pixel points having a first preset pixel value in the video of the camera to obtain a plurality of identification contour ranges, and sorts the plurality of identification contour ranges to obtain corresponding contour numbers;
determining the preset number corresponding to the charging pile and the camera, and selecting the identification contour range whose contour number corresponds to the preset number as the identification monitoring area;
and if the identification monitoring area is judged to have a corresponding identification target, sending the identification target to the server, and taking the corresponding time as a reminding acquisition time, wherein the method comprises the following steps of:
acquiring an image in a recognition monitoring area, and determining a recognition target based on OPENCV recognition, wherein the recognition target at least comprises a person;
if a person exists in the identification monitoring area, or the distance between the outline of the person and the outline of the vehicle is judged to be smaller than a preset distance, the corresponding identification target is sent to the server, and the server takes the moment of receiving the identification target as the reminding acquisition moment;
wherein the step in which the server performs safety analysis in combination with the charging pile information after receiving the identification target and, if the safety reminding condition is judged to be met, synchronizes the video of the identification monitoring area to the request end based on the reminding acquisition time comprises:
the server acquires vibration information of the charging gun in the charging pile information after receiving the identification target, and judges that a safety reminding condition is met if the vibration information reaches a preset vibration value;
and intercepting the video images of the identification monitoring area captured after the reminding acquisition time and synchronizing them to the request end.
2. The safe charging method for information synchronous interconnection according to claim 1, wherein
the step in which the edge computing gateway determines a corresponding preset camera according to the position of the charging pile, obtains target pixel points having a first preset pixel value in the video of the camera to obtain a plurality of identification contour ranges, and sorts the plurality of identification contour ranges to obtain corresponding contour numbers comprises:
the edge computing gateway determines a preset camera corresponding to the charging piles according to a preset corresponding table, wherein the preset corresponding table is provided with a corresponding relation between each charging pile and the camera and a preset number corresponding to the charging pile and the camera;
Identifying target pixel points which are first preset pixel values in the video of the camera, and classifying all target pixel points which are directly adjacent or indirectly adjacent through other target pixel points into a first pixel point set;
retaining the first pixel point sets whose number of pixel points is greater than or equal to a preset number of pixel points, sorting the retained first pixel point sets to obtain a first sequence, and adding corresponding contour numbers to the preset identification contours formed by each first pixel point set according to the order of the first sequence.
3. The safe charging method for information synchronous interconnection according to claim 2, wherein
the step of retaining the first pixel point sets whose number of pixel points is greater than or equal to a preset number of pixel points, sorting the retained first pixel point sets to obtain a first sequence, and adding corresponding contour numbers to the preset identification contours formed by each first pixel point set according to the order of the first sequence comprises:
determining an image ordering direction corresponding to the video, and determining a corresponding coordinate ordering direction according to the image ordering direction;
and determining the center coordinates of the corresponding preset identification contour according to the coordinates of each target pixel point in the first pixel point set, determining the ordering coordinate value corresponding to the center coordinates based on the coordinate ordering direction, and ordering the first pixel point set corresponding to the ordering coordinate value according to the coordinate ordering direction to obtain a first sequence.
4. The safe charging method for information synchronous interconnection according to claim 3, wherein
the determining the image ordering direction corresponding to the video, and determining the corresponding coordinate ordering direction according to the image ordering direction, includes:
performing coordinate processing on the images in the video to obtain a corresponding coordinate system, wherein the coordinate system comprises coordinate axes and a coordinate value corresponding to each pixel point;
and if the image ordering direction corresponding to the video is judged to correspond to the coordinate axis of the abscissa, taking the abscissa axis as the coordinate ordering direction, and if the image ordering direction corresponding to the video is judged to correspond to the coordinate axis of the ordinate, taking the ordinate axis as the coordinate ordering direction.
5. The safe charging method for information synchronous interconnection according to claim 4, wherein
the method for determining the center coordinates of the corresponding preset recognition contour according to the coordinates of each target pixel point in the first pixel point set, determining the ordering coordinate value corresponding to the center coordinates based on the coordinate ordering direction, ordering the first pixel point set corresponding to the ordering coordinate value according to the coordinate ordering direction to obtain a first sequence, and comprises the following steps:
if the coordinate ordering direction is the abscissa axis, acquiring the abscissa of each target pixel point in the first pixel point set, determining an abscissa maximum value and an abscissa minimum value, and determining the ordering coordinate value of the center coordinate according to the abscissa maximum value and the abscissa minimum value;
if the coordinate ordering direction is the positive direction of the abscissa axis, ordering the first pixel point sets from small to large according to the ordering coordinate value to obtain a first sequence;
and if the coordinate ordering direction is the negative direction of the abscissa axis, ordering the first pixel point sets from large to small according to the ordering coordinate value to obtain a first sequence.
6. The safe charging method for information synchronous interconnection according to claim 4, wherein
the method for determining the center coordinates of the corresponding preset recognition contour according to the coordinates of each target pixel point in the first pixel point set, determining the ordering coordinate value corresponding to the center coordinates based on the coordinate ordering direction, ordering the first pixel point set corresponding to the ordering coordinate value according to the coordinate ordering direction to obtain a first sequence, and comprises the following steps:
if the coordinate ordering direction is the ordinate axis, acquiring the ordinate of each target pixel point in the first pixel point set, determining an ordinate maximum value and an ordinate minimum value, and determining the ordering coordinate value of the center coordinate according to the ordinate maximum value and the ordinate minimum value;
if the coordinate ordering direction is the positive direction of the ordinate axis, ordering the first pixel point sets from small to large according to the ordering coordinate value to obtain a first sequence;
and if the coordinate ordering direction is the negative direction of the ordinate axis, ordering the first pixel point sets from large to small according to the ordering coordinate value to obtain a first sequence.
7. The safe charging method for information synchronous interconnection according to claim 1, wherein
the server acquires the video acquired by the camera end if receiving a video calling instruction of a user, and synchronizes to the request end after processing the video based on the starting acquisition time, the identifying acquisition time, the ending acquisition time and/or the reminding acquisition time, and the method comprises the following steps:
if a video calling instruction of a user is received, a server acquires a video acquired by a camera end to generate a corresponding video time axis, and determines the moment corresponding to each image frame in the video time axis;
determining corresponding image frames as separation frames according to the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time, and displaying the separation frames in a node form in a video time axis;
adding, on the video time axis, labels corresponding to the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time respectively, and performing desensitization processing in the corresponding mode on the video segments corresponding to different sections of the time axis according to their labels;
and synchronizing the desensitized video and the corresponding video time axis to the request end.
8. The safe charging method for information synchronous interconnection according to claim 7, wherein
the step of performing desensitization processing in the corresponding mode on the video segments according to their labels comprises:
if the image frames of the corresponding video segments in the time axis are judged to be corresponding identification acquisition moments, acquiring the human body contours corresponding to the corresponding image frames, and acquiring contour edge pixel points corresponding to the human body contours;
determining coordinates of all pixel points to be desensitized in the human body contour according to the contour edge pixel points to obtain a first coordinate set to be desensitized;
and carrying out binarization processing on all pixel points in the first coordinate set to be desensitized according to a preset binarization scheme to obtain desensitized image frames and corresponding video segments.
9. The safe charging method for information synchronous interconnection according to claim 8, wherein
the step of performing desensitization processing in the corresponding mode on the video segments according to their labels comprises:
if the image frames of the corresponding video segments in the time axis are judged to be corresponding reminding acquisition time, acquiring the human body contour corresponding to the corresponding image frames, and acquiring contour edge pixel points corresponding to the human body contour;
Identifying the neck of the human body contour to obtain a neck separation line, separating contour edge pixel points based on the neck separation line, and determining the area of the neck separation line facing the head corresponding to the contour edge pixel points to obtain a second coordinate set to be desensitized;
and carrying out binarization processing on all pixel points in the second coordinate set to be desensitized according to a preset binarization scheme to obtain a desensitized image frame and a corresponding video segment.
10. A safe charging device for information synchronous interconnection, characterized by comprising:
the acquisition module is used for enabling the server to send a first acquisition instruction to the corresponding camera end at the charging pile after receiving the charging request sent by the request end, and the camera end responds to the first acquisition instruction to carry out video acquisition and takes the moment of receiving the first acquisition instruction as the initial acquisition moment;
the identification module is used for enabling the server to synchronously send a second acquisition instruction to the camera end after receiving a charging start signal sent by the charging pile, and the camera end responds to the second acquisition instruction to carry out video acquisition and identification and takes the moment of receiving the second acquisition instruction as an identification acquisition moment;
the determining module is used for enabling the camera end to determine an identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, and, if the identification monitoring area is judged to have a corresponding identification target, to send the identification target to the server and take the corresponding time as a reminding acquisition time;
the analysis module is used for enabling the server to perform safety analysis in combination with the charging pile information after receiving the identification target, and if the safety reminding condition is judged to be met, to start synchronizing the video of the identification monitoring area to the request end based on the reminding acquisition time;
the synchronization module is used for enabling the server to acquire the video acquired by the camera end if receiving a video calling instruction of the user, and synchronizing the video to the request end after processing the video based on the initial acquisition time, the identification acquisition time, the termination acquisition time and/or the reminding acquisition time;
wherein the step in which the camera end determines an identification monitoring area in the video acquired at the second acquisition time according to the position of the charging pile, sends the identification target to the server if the identification monitoring area is judged to have a corresponding identification target, and takes the corresponding time as a reminding acquisition time comprises:
the camera end comprises at least one camera and an edge computing gateway connected with the camera;
the edge computing gateway determines a corresponding preset camera according to the position of the charging pile, obtains a target pixel point which is a first preset pixel value in the video of the camera, obtains a plurality of recognition contour ranges, and sequences the plurality of recognition contour ranges to obtain corresponding contour numbers;
Determining preset numbers corresponding to the corresponding charging piles and the cameras, and selecting an identification contour range of a contour number corresponding to the preset number to be determined as an identification monitoring area;
if the identification monitoring area is judged to have a corresponding identification target, the identification target is sent to a server, and the corresponding time is used as a reminding acquisition time, and the method comprises the following steps:
acquiring an image in a recognition monitoring area, and determining a recognition target based on OPENCV recognition, wherein the recognition target at least comprises a person;
if a person exists in the identification monitoring area, or the distance between the outline of the person and the outline of the vehicle is judged to be smaller than a preset distance, the corresponding identification target is sent to the server, and the server takes the moment of receiving the identification target as the reminding acquisition moment;
wherein the step in which the server performs safety analysis in combination with the charging pile information after receiving the identification target and, if the safety reminding condition is judged to be met, synchronizes the video of the identification monitoring area to the request end based on the reminding acquisition time comprises:
the server acquires vibration information of the charging gun in the charging pile information after receiving the identification target, and judges that a safety reminding condition is met if the vibration information reaches a preset vibration value;
and intercepting the video images of the identification monitoring area captured after the reminding acquisition time and synchronizing them to the request end.
11. An electronic device, comprising: a memory, a processor and a computer program stored in the memory, the processor running the computer program to perform the method of any one of claims 1 to 9.
12. A storage medium having stored therein a computer program for implementing the method of any of claims 1 to 9 when executed by a processor.
CN202311146180.XA 2023-09-07 2023-09-07 Safe charging method and charging device for information synchronous interconnection Active CN116890668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311146180.XA CN116890668B (en) 2023-09-07 2023-09-07 Safe charging method and charging device for information synchronous interconnection


Publications (2)

Publication Number Publication Date
CN116890668A CN116890668A (en) 2023-10-17
CN116890668B true CN116890668B (en) 2023-11-28


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355858A (en) * 2016-09-07 2017-01-25 福建艾思科新能源科技有限公司 Data communication method of charging pile monitoring system
CN110774919A (en) * 2019-12-02 2020-02-11 广州供电局有限公司 Charging pile monitoring system
WO2020151084A1 (en) * 2019-01-24 2020-07-30 北京明略软件系统有限公司 Target object monitoring method, apparatus, and system
CN111475675A (en) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 Video processing system
WO2020199480A1 (en) * 2019-04-03 2020-10-08 平安科技(深圳)有限公司 Body movement recognition method and device
WO2020222722A1 (en) * 2019-05-02 2020-11-05 Антон Валэрийовыч РЭМИЗ Alarm device
CN112211496A (en) * 2019-07-09 2021-01-12 杭州萤石软件有限公司 Monitoring method and system based on intelligent door lock and intelligent door lock
CN216610932U (en) * 2021-12-25 2022-05-27 绿能慧充数字技术有限公司 Self-recognition intelligent charging pile and charging pile management system
CN115131914A (en) * 2022-06-29 2022-09-30 东风汽车有限公司东风日产乘用车公司 Charging port cover control method, device, equipment and storage medium
CN116112636A (en) * 2021-11-09 2023-05-12 青岛海尔科技有限公司 Video acquisition method, device, system, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310016 No. 59, Jiefang East Road, Shangcheng District, Hangzhou, Zhejiang

Applicant after: HANGZHOU POWER SUPPLY COMPANY, STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant after: State Grid Zhejiang Electric Power Co., Ltd. Hangzhou Fuyang district power supply Co.

Applicant after: STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant after: STATE GRID ZHEJIANG ELECTRIC POWER CO., LTD. TAIZHOU POWER SUPPLY Co.

Applicant after: ZHEJIANG University

Address before: No. 809, Central Avenue, Jiaojiang District, Taizhou City, Zhejiang Province, 318001

Applicant before: STATE GRID ZHEJIANG ELECTRIC POWER CO., LTD. TAIZHOU POWER SUPPLY Co.

Applicant before: HANGZHOU POWER SUPPLY COMPANY, STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant before: STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant before: State Grid Zhejiang Electric Power Co., Ltd. Hangzhou Fuyang district power supply Co.

Applicant before: ZHEJIANG University

GR01 Patent grant