CN109815852A - Smart city event management method, device, computer equipment and storage medium - Google Patents
- Publication number: CN109815852A (application CN201910004451.5A / CN201910004451A)
- Authority: CN (China)
- Prior art keywords: video, image, video image, city, key
- Prior art date
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Landscapes
- Image Analysis (AREA)
Abstract
This application relates to a smart city event management method, device, computer equipment and storage medium based on artificial intelligence. The method includes: receiving city video data sent by a terminal, and preprocessing the city video data to obtain video images; obtaining key video images from the video images according to a preset frame number; obtaining the video identifier carried by the city video data, calling the corresponding model according to the video identifier, and inputting the key video images into the model to obtain a recognition result; generating a city anomalous event according to the recognition result, sending it to the terminal, and receiving the audit result fed back by the terminal, the audit result being generated by the terminal according to the city anomalous event; and storing the city anomalous event and the corresponding video images into the corresponding event base according to the audit result. This method reduces resource consumption and improves efficiency.
Description
Technical field
This application relates to the field of computer technology, and more particularly to a smart city event management method, device, computer equipment and storage medium.
Background technique
As human society continues to develop, future cities will carry larger and larger populations. With the rapid development of Internet technology, expectations and demands for an intelligent world are also rising. To address the problems of urban development and achieve sustainable cities, building smart cities has become a worldwide trend in urban development. According to a city's regulatory requirements, different municipal departments are usually responsible for different management tasks in the city. For example, verification of enterprise operating addresses is managed by the administration for industry and commerce, while urban environmental sanitation is managed by the city appearance department. Traditionally, however, whether the administration for industry and commerce verifies addresses or the city appearance department inspects sanitation, staff are sent to conduct on-the-spot investigation and record keeping, which wastes human resources and is inefficient.
Summary of the invention
In view of the above technical problems, it is necessary to provide a smart city event management method, device, computer equipment and storage medium that can reduce human resource consumption and improve efficiency.
A smart city event management method, the method comprising:
receiving city video data sent by a terminal, and preprocessing the city video data to obtain video images;
obtaining key video images from the video images according to a preset frame number;
obtaining a video identifier carried by the city video data, calling a corresponding model according to the video identifier, and inputting the key video images into the model to obtain a recognition result;
generating a city anomalous event according to the recognition result, sending it to the terminal, and receiving an audit result fed back by the terminal, the audit result being generated by the terminal according to the city anomalous event;
storing the city anomalous event and the corresponding video images into a corresponding event base according to the audit result.
In one of the embodiments, obtaining key video images from the video images according to the preset frame number includes:
selecting one frame of the video images as a target video image according to the preset frame number, the target video image carrying first location information;
if it is recognized that no text is present in the target video image, obtaining a comparison video image according to the first location information, the comparison video image being a previously acquired video image carrying the first location information;
comparing the target video image with the comparison video image, and if the target video image meets a first preset requirement, determining it to be a key video image;
if it is recognized that text is present in the target video image, obtaining the video images before and after the target video image;
comparing the target video image with those surrounding video images, and determining the one that meets a second preset requirement to be the key video image.
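The sampling-then-filtering logic above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the `Frame` type, its fields, and the single numeric similarity standing in for the "first preset requirement" are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int          # position of the frame in the decoded sequence
    similarity: float   # similarity to the prestored comparison image, in [0, 1]

def select_key_frames(frames, preset_interval, threshold):
    """Take every `preset_interval`-th frame as a target video image, then
    keep it as a key video image only if its similarity to the comparison
    image meets the (here numeric) first preset requirement."""
    return [f for f in frames[::preset_interval] if f.similarity >= threshold]
```

In the no-text branch described above, `similarity` would come from comparing the target frame with the comparison image fetched by its location information.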
In one of the embodiments, the video identifier includes address verification, people-flow analysis and sanitation detection;
calling the corresponding model according to the video identifier and inputting the key video image into the model to obtain the recognition result includes:
if the video identifier is address verification, calling an image recognition model to recognize the key video image and obtain a recognition result, the recognition result including recognized text;
if the video identifier is people-flow analysis, calling a deep neural network model to recognize the key video image and obtain a recognition result, the recognition result including a people-flow count and an identification score;
if the video identifier is sanitation detection, obtaining a key area from the key video image and calling a convolutional neural network model to obtain a recognition result, the recognition result including a characteristic value of the key area.
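The identifier-to-model dispatch can be sketched as a simple lookup. The identifier strings and the stub models below are illustrative stand-ins; the actual models are trained in advance by the server and are not described here.

```python
def recognize(key_image, video_identifier, models):
    """Call the model registered for the given video identifier on the key
    video image. `models` maps identifier -> callable; the three
    identifiers mirror those named in the text."""
    try:
        model = models[video_identifier]
    except KeyError:
        raise ValueError(f"unknown video identifier: {video_identifier!r}")
    return model(key_image)

# Illustrative stand-ins for the three pre-trained models.
models = {
    "address_verification": lambda img: {"text": "XX Company Ltd"},
    "people_flow": lambda img: {"count": 42, "score": 0.91},
    "sanitation": lambda img: {"feature_value": 0.73},
}
```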
In one of the embodiments, generating the city anomalous event according to the recognition result includes:
when the recognition result is recognized text, obtaining second location information carried by the key video image;
obtaining company information according to the second location information, the company information including enterprise names;
using a SOLR query to match the recognized text against the company information to obtain the corresponding enterprise name;
obtaining the registered address corresponding to the enterprise name, and obtaining third location information according to the registered address;
calculating the difference between the second location information and the third location information, and generating an event according to the difference;
when the recognition result is a people-flow count and an identification score, obtaining a historical people-flow count if the identification score is greater than or equal to a first preset value;
determining a people-flow difference according to the historical people-flow count and the people-flow count, and generating an event according to the people-flow difference;
when the recognition result is a characteristic value of the key area, generating an event according to the characteristic value if the characteristic value is greater than or equal to a second preset value.
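The branching above amounts to three threshold rules, one per recognition-result type. The sketch below encodes them under stated assumptions: the thresholds, field names, and the reduction of a location to a single number are all illustrative placeholders, not values from the patent.

```python
def generate_event(result, first_preset=0.8, second_preset=0.5, history_count=100):
    """Apply the per-result-type rules described above. `result['kind']`
    selects the branch; all numeric thresholds are placeholders."""
    if result["kind"] == "text":
        # Address verification: compare the location carried by the key
        # video image (second) with the registered-address location (third).
        diff = abs(result["second_location"] - result["third_location"])
        return {"type": "address", "difference": diff}
    if result["kind"] == "flow":
        # People-flow analysis: only a trusted score yields an event.
        if result["score"] >= first_preset:
            return {"type": "flow", "delta": result["count"] - history_count}
        return None
    if result["kind"] == "feature":
        # Sanitation detection: a large characteristic value yields an event.
        if result["value"] >= second_preset:
            return {"type": "sanitation", "value": result["value"]}
        return None
    raise ValueError("unknown recognition result kind")
```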
In one of the embodiments, calling the deep neural network model to recognize the key video image and obtain the recognition result, the recognition result including the people-flow count and the identification score, includes:
recognizing the key video image to obtain body-part candidate boxes;
determining the people-flow count according to the body-part candidate boxes;
obtaining the preset score and weight of each body-part candidate box;
calculating the identification score according to the preset scores and weights.
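The text does not fix the formula for combining preset scores and weights; a weighted average is one natural reading, sketched below with each candidate box reduced to a hypothetical `(preset_score, weight)` pair.

```python
def people_flow_result(candidate_boxes):
    """Given body-part candidate boxes as (preset_score, weight) pairs,
    count the boxes for the people-flow count and combine the preset
    scores into a weight-normalized identification score."""
    count = len(candidate_boxes)
    total_weight = sum(w for _, w in candidate_boxes)
    if total_weight == 0:
        return count, 0.0
    score = sum(s * w for s, w in candidate_boxes) / total_weight
    return count, score
```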
In one of the embodiments, calling the convolutional neural network model to obtain the recognition result, the recognition result including the characteristic value of the key area, includes:
inputting the key area into a first feature extraction network to extract first feature data;
inputting the first feature data into a second feature extraction network to extract second feature data;
performing feature decomposition on the second feature data to obtain the characteristic value.
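The two-stage extraction plus decomposition can be sketched as function composition. The stub networks and the max-magnitude "decomposition" below are assumptions for illustration; the text does not specify the networks' architecture or the decomposition method.

```python
def characteristic_value(key_area, first_net, second_net):
    """Chain the first and second feature extraction networks over the key
    area, then reduce the second feature data to a single characteristic
    value (here, as an assumed stand-in: its largest magnitude component)."""
    first_features = first_net(key_area)
    second_features = second_net(first_features)
    return max(abs(v) for v in second_features)

# Illustrative stand-ins for the two trained feature extraction networks.
first_net = lambda xs: [2 * x for x in xs]
second_net = lambda xs: [x - 1 for x in xs]
```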
In one of the embodiments, the SOLR query includes exact matching, fuzzy matching and similarity matching; using the SOLR query to match the recognized text against the company information to obtain the corresponding enterprise name includes:
matching the recognized text against the company information using exact matching to obtain a first matching rate and a corresponding enterprise name;
matching the recognized text against the company information using fuzzy matching to obtain a second matching rate and a corresponding enterprise name;
matching the recognized text against the company information using similarity matching to obtain a third matching rate and a corresponding enterprise name;
selecting the largest of the first, second and third matching rates as the matching result, and obtaining the enterprise name corresponding to the matching result.
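Picking the best of several matching modes can be sketched as below. The standard library's `difflib` stands in for SOLR's similarity query and the exact matcher is trivial, so both are illustrative rather than the patent's SOLR configuration.

```python
import difflib

def exact_match(text, names):
    # Stand-in for a SOLR exact-match query.
    return (1.0, text) if text in names else (0.0, None)

def similarity_match(text, names):
    # Stand-in for a SOLR similarity query, using difflib's ratio.
    rate, best = max(
        (difflib.SequenceMatcher(None, text, n).ratio(), n) for n in names
    )
    return rate, best

def match_company(identified_text, names, matchers):
    """Run every matching mode and keep the enterprise name whose
    matching rate is highest, as in the embodiment above."""
    rate, name = max(m(identified_text, names) for m in matchers)
    return name, rate
```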
A smart city event management device, the device comprising:
a receiving module, configured to receive the city video data sent by the terminal and preprocess the city video data to obtain video images;
an obtaining module, configured to obtain key video images from the video images according to the preset frame number;
a calling module, configured to obtain the video identifier carried by the city video data, call the corresponding model according to the video identifier, and input the key video images into the model to obtain a recognition result;
a generating module, configured to generate a city anomalous event according to the recognition result, send it to the terminal, and receive the audit result fed back by the terminal, the audit result being generated by the terminal according to the city anomalous event;
a storage module, configured to store the city anomalous event and the corresponding video images into the corresponding event base according to the audit result.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the smart city event management method of any one of the above.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the smart city event management method of any one of the above.
In the above smart city event management method, device, computer equipment and storage medium, the city video data sent by the terminal is received and preprocessed to obtain video images, ensuring that high-quality video images are obtained. Key video images are obtained from the video images according to the preset frame number, the corresponding model is called according to the video identifier carried by the city video data, the key video images are input into the model to obtain a recognition result, and a city anomalous event is generated according to the recognition result, returned to the terminal for audit, and then stored into the corresponding event base. This not only ensures the accuracy of the model's recognition, it also removes the need for manual on-the-spot investigation, reducing human resource consumption and improving efficiency.
Description of the drawings
Fig. 1 is the application scenario diagram of smart city event management method in one embodiment;
Fig. 2 is the flow diagram of smart city event management method in one embodiment;
Fig. 3 is the flow diagram that key video sequence image step is obtained in one embodiment;
Fig. 4 is the structural block diagram of smart city incident management device in one embodiment;
Fig. 5 is the internal structure chart of computer equipment in one embodiment.
Specific embodiments
To make the objects, technical solutions and advantages of the application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the application and are not intended to limit it.
The smart city event management method provided by the application can be applied in the application environment shown in Fig. 1, in which a terminal 102 communicates with a server 104 through a network. The server 104 receives the city video data sent by the terminal 102 and preprocesses it to obtain video images; the server 104 obtains the video identifier carried by the video data, and obtains key video images from the video images according to the preset frame number; the server 104 calls the corresponding model according to the video identifier and inputs the key video images into the model to obtain a recognition result; the server 104 generates a city anomalous event according to the recognition result, sends it to the terminal 102, and receives the audit result fed back by the terminal 102, the audit result being generated by the terminal 102 according to the city anomalous event; the server 104 stores the city anomalous event and the corresponding video images into the corresponding event base according to the audit result. The terminal 102 can be, but is not limited to, a personal computer, laptop, smartphone, tablet computer or portable wearable device, and the server 104 can be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a smart city event management method is provided. Taking its application to the server in Fig. 1 as an example, the method comprises the following steps:
Step S202: receive the city video data sent by the terminal, and preprocess the city video data to obtain video images.
Here, city video data refers to video images collected by an acquisition device connected to the terminal. For example, a road-survey vehicle's acquisition device can periodically capture driving video and synchronously upload it to the terminal connected to the device; the terminal then encodes the collected video images. A video image is a video frame, the smallest single-picture unit in moving imagery; one frame is one static video image. Preprocessing refers to the processing applied to the video data to refine it before analysis, and includes decoding, segmentation, grey-scale adjustment, denoising and sharpening. Decoding restores the original video images. Grey-scale adjustment improves picture quality by changing pixel grey values so that the image is clearer. Because video images are usually affected by equipment and external environmental noise during digitization and transmission, and noise is the main source of image interference, denoising refers to reducing the noise in an image. Sharpening refers to increasing the clarity or focus of a particular region of the image so that the colours of that region become more distinct.
Specifically, after the server receives the video data sent by the terminal, it first decodes the received video data with a video decoder to restore the video images. The video images are then segmented based on temporal information, using the continuity and correlation between adjacent video images: a difference image can be obtained by subtracting a background frame from the current video frame, or equally from the difference between two frames or among multiple frames. After segmentation, the server applies a linear transformation and a denoising algorithm to enhance the video images, completing the preprocessing.
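The background-subtraction step just described can be sketched pixel-wise. In this illustrative sketch frames are lists of rows of grey values, and the threshold separating changed from unchanged pixels is an assumed placeholder.

```python
def difference_image(current, background, threshold=25):
    """Subtract the background frame from the current frame and threshold
    the absolute per-pixel difference, marking changed pixels with 1."""
    return [
        [1 if abs(c - b) > threshold else 0 for c, b in zip(cur_row, bg_row)]
        for cur_row, bg_row in zip(current, background)
    ]
```

The same function applied to two consecutive frames instead of a frame and a background gives the between-frames variant mentioned above.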
For example, the acquisition device of the terminal may be mounted at a fixed location, or a user may use a handheld terminal's acquisition device to capture video of the kitchen of a fixed dining venue or a household kitchen. After the acquisition device sends the collected video images to the terminal, the terminal encodes and compresses them into MPEG or H.26X format video data and sends it to the server. After the server receives the encoded and compressed video data, it first decodes the MPEG or H.26X data back into video images, and then obtains clear, high-quality video images through preprocessing such as segmentation, grey-scale adjustment, denoising and sharpening.
Step S204: obtain key video images from the video images according to the preset frame number.
Here, the preset frame number is a preconfigured sampling interval for obtaining video images. For example, if the preset frame number is 10, one frame of video image is obtained every 10 frames. A key video image is a collected video image that meets the task's requirements. For example, if the video images were collected for address verification, the key video image needs to be the video image with the highest recognition precision so that text can be recognized accurately. If the video images were collected for people-flow analysis, the key video image is a video image of the place where the people flow is to be analyzed. If the video images were collected for sanitation detection, the key video image is a video image that contains garbage.
Specifically, after the server receives the video data sent by the terminal and preprocesses it into video images, the server obtains key video images from the video images according to the preconfigured frame number.
Step S206: obtain the video identifier carried by the city video data, call the corresponding model according to the video identifier, and input the key video images into the model to obtain a recognition result.
Here, the video identifier is a mark carried by the video data when the terminal uploads it, used to indicate the video type. The video identifier includes address verification, people-flow analysis and sanitation detection. For example, after the acquisition device sends the collected video images to the terminal and the terminal encodes them into video data, the terminal can send that video data to the server: the terminal user opens the corresponding smart city event management system with a mouse or touch screen, clicks the upload or send button, selects the video identifier corresponding to the video data from a pop-up list or enters it manually, and uploads the video data to the system once the identifier has been chosen. The models have been built and trained in advance by the server.
Specifically, when the server receives video data sent by the terminal, it preprocesses it into video images and, after obtaining key video images from the video images according to the preconfigured frame number, obtains the video identifier carried by the video data. Since a key video image is a video image that meets the requirements, and the video images are derived from preprocessing the video data, the video identifier of a key video image is the video identifier carried by the corresponding video data. The server calls the corresponding model according to the video identifier carried by the video data, inputs the key video images into the called model, and the model recognizes them to produce a recognition result.
Step S208: generate a city anomalous event according to the recognition result, send it to the terminal, and receive the audit result fed back by the terminal, the audit result being generated by the terminal according to the city anomalous event.
Step S210: store the city anomalous event and the corresponding video images into the corresponding event base according to the audit result.
Here, an event includes an event description, an event address and an event tag. The event description is an overall description generated from the recognition result, the event address is the geographical location in the video image recognized and detected by the model, and the event tag is a description of the result obtained from the recognition.
Specifically, after the server obtains the recognition result from the model, it obtains preset rules and generates the corresponding event description, event address and event tag from the preset rules and the recognition result. The preset rules are the business logic for generating different events from different recognition results. Taking sanitation detection as an example, if the recognition result obtained by inputting the key video image into the model is garbage, the preset rules generate the event description "garbage dumped and not handled", the event address is determined from the LBS (Location Based Services) information carried by the video data sent by the device terminal, for example road XXX, XX district, XX city, and the event tag is "garbage".
After the server obtains the event description, event address and event tag, it returns the event and its corresponding video images to the terminal. Based on the event description, event address and event tag carried in the event, the user further confirms whether the recognition result detected by the model is correct and, if so, returns an audit result to the server. According to the received audit result, the server stores the event and its corresponding video images into the corresponding event base. Further, if the audit result confirms the event is correct, the video images and the event are stored into the corresponding event base: if the event and its corresponding video images concern address verification, they are stored into the industry-and-commerce event base; if the event and its corresponding video images concern sanitation detection, which is managed by the city appearance department, they are stored into the city appearance event base.
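The routing of audited events to department event bases can be sketched as a mapping from event type to base. The mapping keys, base names, and the boolean audit flag below are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical event-type -> department event base mapping.
DEPARTMENT_BASES = {
    "address": "industry_commerce",   # administration for industry and commerce
    "sanitation": "city_appearance",  # city appearance department
}

def store_event(event, audit_confirmed, event_bases):
    """Store a confirmed event into the event base of the department
    responsible for its type; unconfirmed events are not stored."""
    if not audit_confirmed:
        return None
    base = DEPARTMENT_BASES[event["type"]]
    event_bases.setdefault(base, []).append(event)
    return base
```

A periodic job could then push each base's accumulated events to the corresponding second-terminal system, as described below.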
Further, a city anomalous event is an anomalous event determined from the event description. If the server determines from the event description that the event is anomalous, it stores the anomalous event into the corresponding anomalous event base. Taking address verification as an example, an anomalous event is one in which the operating address obtained by the model's recognition and detection is inconsistent with the registered address, and that event is stored into the industry-and-commerce anomalous event base. The server periodically pushes the events in the event bases and anomalous event bases to a second terminal. The second terminal is a government affairs department system docked with the server, such as the city appearance department system or the administration for industry and commerce system. For example, if an event is a sanitation-detection event stored in the city appearance event base, the events and video images in that base are periodically pushed to the city appearance department system; after the city appearance department receives the push, it takes corresponding rectification measures according to the specifics of the event.
In the above smart city event management method, the city video data sent by the terminal is received and preprocessed into video images, ensuring that high-quality video images are obtained. Key video images are obtained from the video images according to the preset frame number, the corresponding model is called according to the video identifier carried by the video data, the key video images are input into the model to obtain a recognition result, and a city anomalous event generated from the recognition result is returned to the terminal for audit and then stored into the corresponding event base. This not only ensures the accuracy of the model's recognition, it also removes the need for manual on-the-spot investigation, reducing human resource consumption and improving efficiency.
In one embodiment, as shown in Fig. 3, obtaining key video images from the video images according to the preset frame number comprises the following steps:
Step S302: select one frame of the video images as the target video image according to the preset frame number, the target video image carrying first location information.
Here, the target video image is the video image obtained according to the preset frame number; that is, the target video image belongs to the video images, and the video images include the target video image. The first location information carried by the target video image is LBS information, i.e. the location information (geographical coordinates) of the mobile terminal user obtained through the telecom mobile operator's radio communication network or through external GPS positioning: the geographical location at which the target video image was collected.
Specifically, the server preprocesses the video data into video images, which may be one frame or multiple frames. When there are multiple frames, one frame is obtained according to the preset frame number as the target video image. For example, if the video data contains 30 video frames and the preset frame number is 5, one frame of video image is obtained every 5 frames. Moreover, the video images carry LBS information, so the video image determined to be the target video image carries LBS information as well.
Step S304: if it is recognized that no text is present in the target video image, obtain a comparison video image according to the first location information; the comparison video image is a previously acquired video image carrying the first location information.
Here, the comparison video image is a satisfactory video image acquired in advance at the target location for comparison with the target video image, and it carries the same location information as the target video image, i.e. the first location information. Taking sanitation detection as an example, if the acquisition device collects video images of the XX street that currently needs sanitation detection, then a clean video image of the XX street collected in advance is obtained as the comparison video image according to the LBS information carried by the video images.
Specifically, after the server obtains the target video image according to the preset frame number, it uses image recognition technology to determine whether text is present in the target video image. If no text is recognized, the video data is not intended for address verification, so the server obtains the LBS information of the target video image and then the prestored comparison video image according to that LBS information. Alternatively, the server can judge from the video identifier carried by the video data whether the data is intended for address verification: if the video identifier is not address verification, the data is not intended for address verification, and the server likewise obtains the LBS information of the target video image and then the prestored comparison video image according to that LBS information.
Step S306: the target video image is compared with the comparison video image, and if the target video image meets a first preset requirement, it is determined to be the key video image.
Specifically, after obtaining the comparison video image according to the LBS information, the server compares it with the target video image to judge whether the target video image meets the first preset requirement of a key video image; if so, the target video image is determined to be the key video image.
If the target video image is used for people-flow analysis, the target video image is compared with the comparison image: by comparing the angle of the same building in the two images, it is determined whether the shooting angle of the target video image is consistent with that of the comparison image, or whether the similarity of the angles is greater than or equal to a preset value. If so, the target video image is determined to be the key video image. If the similarity between the shooting angle of the target video image and the comparison image is less than the preset value, the target video image does not qualify as a key video image for people-flow analysis; that is, the shooting angle of the target video image selected by the server has deviated. In that case, the preceding and following video images of the target video image are obtained, compared with the comparison video image, and the video image that meets the requirement is taken as the key video image. The number of preceding and following video images obtained is determined by the actual situation. Further, if a demand for people-flow analysis at a certain target location is determined, the server may first obtain the LBS information of that target location, then obtain the video images whose LBS matches it, and compare the obtained video images to determine the key video image.
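The angle-based selection described above may be sketched as follows; this is an illustrative Python sketch in which the similarity values are assumed to have been precomputed in [0, 1] by some image-comparison routine that is not shown:

```python
def select_flow_keyframe(target_sim, neighbor_sims, threshold=0.8):
    """Pick the key frame for people-flow analysis by shooting-angle
    similarity with the comparison image. Index 0 is the target frame
    itself; indices 1.. are its preceding/following frames. Returns the
    index of a frame meeting the preset threshold, or None if no frame
    qualifies."""
    if target_sim >= threshold:
        return 0  # the target frame's angle already matches
    if not neighbor_sims:
        return None
    # Shooting angle has deviated: fall back to the best neighbouring frame.
    best_idx, best = max(enumerate(neighbor_sims, start=1), key=lambda p: p[1])
    return best_idx if best >= threshold else None
```

The `threshold` of 0.8 is an invented stand-in for the preset value mentioned in the text.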
If the target video image is used for health detection, the target video image is compared with the comparison image to judge whether garbage is present in the target video image. If garbage is present, the target video image is determined to be the key video image; if no garbage is present, the target video image is consistent with the clean comparison image and does not meet the requirement of a key video image for health detection, in which case other target video images may be obtained and compared again.
Step S308: if it is recognized that text exists in the target video image, the preceding and following video images of the target video image are obtained.

Step S310: the target video image and its preceding and following video images are compared, and the one meeting a second preset requirement is selected and determined to be the key video image.
Here, the preceding and following video images include a preceding video image and a following video image: taking a certain video image as reference, the video image before it is the preceding video image, and the video image after it is the following video image. Specifically, if text is recognized in the target video image, i.e. the video data is intended for address verification, the preceding and following video images of the target video image are obtained, and the target video image is compared with them to select the video image with the highest recognition precision as the key video image. Recognition precision refers to the completeness of the captured text: for example, if the text captured in one frame is "safety" while the text captured in another frame is "safety bank", the video image corresponding to "safety bank" is selected as the key video image.
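The precision-based selection may be sketched as follows (illustrative Python; the OCR mapping and frame identifiers are assumptions, with text-length standing in for completeness):

```python
def pick_text_keyframe(ocr_results):
    """Among a target frame and its neighbours, select the frame whose
    recognized text is most complete. `ocr_results` maps a frame id to
    the text recognized in it (a hypothetical OCR output); completeness
    is approximated here by the length of the recognized text."""
    return max(ocr_results, key=lambda fid: len(ocr_results[fid]))

# The frame reading "safety bank" beats the frame reading only "safety".
best = pick_text_keyframe({"frame_a": "safety", "frame_b": "safety bank"})
```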
In one embodiment, step S206, in which the corresponding model is called according to the video identifier and the key video image is input into the model to obtain a recognition result, includes: if the video identifier is address verification, calling an image recognition model to recognize the key video image and obtain a recognition result, the recognition result including identified text; if the video identifier is people-flow analysis, calling a deep neural network model to recognize the key video image and obtain a recognition result, the recognition result including a people-flow quantity and an identification score; and if the video identifier is health detection, obtaining a key region from the key video image and calling a convolutional neural network model to obtain a recognition result, the recognition result including a feature value of the key region.
Since the video identifier covers address verification, people-flow analysis and health detection, different models are trained and constructed for the different demands in order to guarantee the accuracy of the models' recognition and detection results. That is, for address verification, an image recognition model for verifying addresses is trained; this image recognition model may be a deep neural network model. For people-flow analysis, a deep neural network model for analyzing the people flow is constructed and trained. For health detection, a convolutional neural network model for detecting health conditions is constructed and trained. The key region refers to a partial video image cropped from the key video image: for example, if the key video image is recognized as containing garbage, the region containing the garbage is separately cropped out of the key video image as the key region.
Specifically, the server obtains the video identifier carried by the video data, calls the corresponding model according to the video identifier, and inputs the obtained key video image corresponding to the video data into the called model. For example, if the video identifier is address verification, the image recognition model is called to recognize the key video image and obtain the identified text; if the video identifier is people-flow analysis, the deep neural network model is called to recognize the key video image and obtain the people-flow quantity and identification score; if the video identifier is health detection, the key region is obtained from the key video image, and the convolutional neural network model is called to obtain the feature value of the key region.
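The dispatch by video identifier may be sketched as follows (illustrative Python; the identifier strings, the `models` mapping and the result keys are assumptions, not the claimed interface):

```python
def dispatch_model(video_identifier, key_frame, models):
    """Route a key video image to the model matching its identifier.

    `models` is a hypothetical mapping of model names to callables; the
    three identifiers mirror the demands in the text (address
    verification, people-flow analysis, health detection)."""
    if video_identifier == "address_verification":
        return {"text": models["image_recognition"](key_frame)}
    if video_identifier == "people_flow":
        count, score = models["deep_nn"](key_frame)
        return {"people_count": count, "score": score}
    if video_identifier == "health_detection":
        region = key_frame["key_region"]  # cropped sub-image, assumed prepared
        return {"feature_value": models["cnn"](region)}
    raise ValueError(f"unknown identifier: {video_identifier}")

# Stub models standing in for the trained networks.
models = {
    "image_recognition": lambda f: "safety bank",
    "deep_nn": lambda f: (12, 0.9),
    "cnn": lambda r: 0.85,
}
result = dispatch_model("people_flow", {}, models)
```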
In one embodiment, step S208, in which the city anomalous event is generated according to the recognition result, includes: when the recognition result is identified text, obtaining second location information carried by the key video image; obtaining enterprise information according to the second location information, the enterprise information including enterprise names; using a SOLR query to match the identified text against the enterprise information to find the corresponding enterprise name; obtaining the registered address corresponding to the enterprise name, and obtaining third location information according to the registered address; and calculating the difference value between the second location information and the third location information, and generating the event according to the difference value. When the recognition result is a people-flow quantity and an identification score, if the identification score is greater than or equal to a first preset value, the historical people-flow quantity is obtained; a people-flow difference is determined according to the historical people-flow quantity and the people-flow quantity, and the event is generated according to the people-flow difference. When the recognition result is the feature value of the key region, if the feature value is greater than or equal to a second preset value, the event is generated according to the feature value. The historical people-flow quantity denotes the people-flow quantity of the target location before the current people-flow analysis.
Here, the second location information refers to the LBS information carried by the key video image; since the key video image is a target video image that meets the requirement, and the target video image carries the first location information, the second location information is equal to the first location information of the qualifying target video image. The enterprise information refers to the information that enterprises file when registering with the administration for industry and commerce, including the enterprise name, the corresponding registered address and so on; it is obtained from the industrial and commercial enterprise library acquired by the server through docking with the administration for industry and commerce system. SOLR is a high-performance, full-featured search query server developed in Java 5. Since LBS information is usually expressed as coordinates, addresses and coordinates can be converted into each other; the third location information is thus obtained by converting the registered address in the enterprise information. The first preset value is a preset critical value of the identification score, and the second preset value is a preset critical value of the feature value; both are set according to the actual situation.
Specifically, when the recognition result obtained after the server calls the image recognition model according to the video identifier to perform recognition and detection on the key video image is identified text, the server obtains the LBS information carried by the key video image as the second location information. To narrow the scope, a region is first delimited according to this LBS information; the size of the delimited region is determined according to the actual situation. All enterprise information within the delimited region is then obtained from the industrial and commercial enterprise library acquired through docking with the administration for industry and commerce system; the enterprise information needs to include the enterprise names and the corresponding registered addresses. The server uses the SOLR query to match the obtained identified text against the enterprise information to find the corresponding enterprise name, converts the registered address corresponding to the matched enterprise name into LBS information as the third location information, and subtracts the second location information from the third location information to obtain the difference value between the two. If the difference value is within the preset range, it proves that the enterprise's actual operating address is the same as its registered address and the operating address is normal; if the difference exceeds the preset range, it proves that the operating address differs from the registered address and the operating address is abnormal. The event is generated according to whether the operating address is abnormal: for example, if the operating address is abnormal, the event description is that the operating address is abnormal, and the event address is the second location information of the target video image, i.e. the actually photographed operating address of the enterprise.
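One plausible way to realize the difference-value calculation is the haversine distance between the two LBS coordinates, sketched below; the (lat, lon) coordinate format and the 200 m preset range are assumptions, not values fixed by the embodiment:

```python
import math

def address_anomaly(actual_lbs, registered_lbs, max_metres=200.0):
    """Compare the LBS coordinates photographed on site (second location
    information) with the converted registered address (third location
    information); flag an anomaly when the haversine distance exceeds
    the preset range."""
    lat1, lon1 = map(math.radians, actual_lbs)
    lat2, lon2 = map(math.radians, registered_lbs)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    metres = 2 * 6371000 * math.asin(math.sqrt(a))  # mean Earth radius
    return metres > max_metres, metres

# Identical coordinates: operating address matches the registration.
anomalous, metres = address_anomaly((22.5431, 114.0579), (22.5431, 114.0579))
```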
When the recognition result obtained after the server calls the deep neural network model according to the video identifier to perform recognition and detection on the key video image is a people-flow quantity and an identification score, the server first judges whether the identification score is greater than or equal to the first preset value. If the identification score is less than the first preset value, the recognition result is inaccurate and is fed back to the terminal for adjustment of the model; if the identification score is greater than or equal to the first preset value, the recognition result is accurate and can be used. When the identification score is greater than or equal to the first preset value, the historical people-flow quantity is obtained. The server constructs a prediction model according to the historical people-flow quantity, predicts the quantity of the current people flow through the prediction model to obtain a predicted people-flow quantity, calculates and compares the actually obtained people-flow quantity with the predicted people-flow quantity to determine the people-flow difference, and generates the event according to the people-flow difference. For example, if the calculated people-flow difference is 10, the event description is that the people-flow difference is 10, and the event address is the second location information of the target video image, i.e. the actually photographed target location of the people-flow analysis. The prediction model may perform a one-time accumulation of the historical people-flow quantities to obtain a corresponding accumulated sequence, construct a data matrix according to the accumulated sequence, solve the data matrix to obtain the development coefficient and the grey actuating quantity, and establish a differential equation according to the development coefficient and the grey actuating quantity to complete the construction of the prediction model.
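The construction just outlined (one-time accumulation, data matrix, development coefficient and grey actuating quantity, differential equation) matches the classical grey GM(1,1) model, which may be sketched as follows (illustrative Python; the series values in the usage example are invented):

```python
import math

def gm11_next(x0):
    """GM(1,1) grey model: accumulate the history series once, build the
    data matrix, least-squares-solve for the development coefficient a
    and the grey actuating quantity b, then predict the next value."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]            # one-time accumulation
    z = [-(x1[k - 1] + x1[k]) / 2 for k in range(1, n)]  # background values
    y = x0[1:]
    # 2x2 normal equations for y = a*z + b (the data-matrix solve).
    szz = sum(v * v for v in z)
    sz = sum(z)
    szy = sum(v * w for v, w in zip(z, y))
    sy = sum(y)
    m = n - 1
    det = szz * m - sz * sz
    a = (m * szy - sz * sy) / det                        # development coefficient
    b = (szz * sy - sz * szy) / det                      # grey actuating quantity
    # Whitened differential equation gives the accumulated prediction.
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)                     # de-accumulate

# Predict the next people-flow value from a short history (invented numbers).
next_flow = gm11_next([10, 12, 14.4, 17.28])
```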
When the recognition result obtained after the server calls the convolutional neural network model according to the video identifier to perform recognition and detection on the key video image is the feature value of the key region, the server judges whether the feature value is greater than or equal to the second preset value; in this embodiment, the second preset value is preferably 80%. That is, when the feature value is greater than or equal to 80%, the event is generated according to the feature value. For example, if the feature value is greater than or equal to 80%, the event description is that garbage dumping is untreated, the event address is likewise the second location information of the target video image, i.e. the actually photographed garbage site, and the event tag is garbage.
In one embodiment, calling the deep neural network model to recognize the key video image and obtain the recognition result, the recognition result including the people-flow quantity and the identification score, specifically includes: recognizing the key video image to obtain body-part candidate frames; determining the people-flow quantity according to the body-part candidate frames; obtaining the preset scores and weights of the body-part candidate frames; and calculating the identification score according to the preset scores and weights.
Here, when the deep neural network model recognizes the key video image, the recognized body parts may be marked with frames, these being the body-part candidate frames. Moreover, since different body parts can be recognized when recognizing a human body, the marking-frame colors or marking symbols differ according to the part, and different scores and weights are preset for the different marking frames, the weights summing to 1. For example, the frame marking a face carries a higher weight than the frame marking a hand or foot.
Specifically, after the server completes the recognition of the key video image, it determines the people-flow quantity according to the quantity of body-part candidate frames. Moreover, the server obtains the preset score and weight corresponding to each body-part candidate frame, multiplies each preset score by its weight, and accumulates the products to obtain the identification score.
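The score accumulation may be sketched as follows (illustrative Python; the part names, scores and weights are invented example values):

```python
def identification_score(candidates):
    """Accumulate preset score x weight over the detected body-part
    candidate frames; the weights are assumed to sum to 1. Following the
    text, the people-flow quantity is taken from the number of candidate
    frames."""
    score = sum(c["score"] * c["weight"] for c in candidates)
    return len(candidates), score

# A face frame is weighted higher than a hand/foot frame (values assumed).
boxes = [{"part": "face", "score": 0.9, "weight": 0.6},
         {"part": "hand", "score": 0.8, "weight": 0.4}]
count, score = identification_score(boxes)
```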
In one embodiment, calling the convolutional neural network model to obtain the recognition result, the recognition result including the feature value of the key region, specifically includes: inputting the key region into a first feature extraction network to extract first feature data; inputting the first feature data into a second feature extraction network to extract second feature data; and performing feature decomposition on the second feature data to obtain the feature value.
Here, the first feature extraction network and the second feature extraction network are both used to extract features of the key region. The first feature extraction network includes at least one convolutional layer and one pooling layer, and the second feature extraction network may likewise include at least one convolutional layer and one pooling layer.
Specifically, convolutional neural networks include one-dimensional, two-dimensional and three-dimensional convolutional neural networks. Two-dimensional convolutional neural networks are commonly applied to image recognition, so this embodiment uses a two-dimensional convolutional neural network. Moreover, the basic structure of a convolutional neural network includes two layers, namely a feature extraction layer and a feature mapping layer. The server first extracts the first feature data of the key region through the first feature extraction network of the feature extraction layer, then extracts the second feature data from the first feature data through the second feature extraction network, and finally performs feature decomposition on the second feature data to obtain the final feature value.
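A minimal sketch of the two-stage extraction, assuming single-channel input and pretrained kernels (illustrative Python with NumPy; the sigmoid reduction standing in for "feature decomposition" is an assumption):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (single channel), the core op of a
    convolutional layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """Max-pooling layer with non-overlapping windows."""
    h, w = img.shape
    trimmed = img[:h - h % size, :w - w % size]
    return trimmed.reshape(trimmed.shape[0] // size, size,
                           trimmed.shape[1] // size, size).max(axis=(1, 3))

def extract_feature_value(region, k1, k2):
    """Two stacked conv+pool stages (the first and second feature
    extraction networks of the text), then reduce the second feature
    data to a scalar in [0, 1] via a sigmoid -- the final feature value.
    Kernels k1/k2 are assumed to be pretrained."""
    f1 = max_pool(conv2d(region, k1))            # first feature data
    f2 = max_pool(conv2d(f1, k2))                # second feature data
    return float(1 / (1 + np.exp(-f2.mean())))   # decomposition to a value

region = np.ones((10, 10))       # stand-in for a cropped key region
kernel = np.ones((3, 3)) / 9.0   # stand-in for a pretrained kernel
value = extract_feature_value(region, kernel, kernel)
```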
In one embodiment, the SOLR query includes exact matching, fuzzy matching and similarity matching. Using the SOLR query to match the identified text against the enterprise information to find the corresponding enterprise name includes: using exact matching to match the identified text against the enterprise information to obtain a first matching rate and a corresponding enterprise name; using fuzzy matching to match the identified text against the enterprise information to obtain a second matching rate and a corresponding enterprise name; using similarity matching to match the identified text against the enterprise information to obtain a third matching rate and a corresponding enterprise name; and selecting the maximum matching rate among the first, second and third matching rates as the matching result and obtaining the enterprise name corresponding to the matching result.
Specifically, exact matching, fuzzy matching and similarity matching are three different query modes of the SOLR query. The server may perform all three query modes on the obtained identified text: first, exact matching is used to match the identified text against the enterprise information to find the corresponding enterprise name and obtain the first matching rate; next, fuzzy matching is used to find the corresponding enterprise name and obtain the second matching rate; finally, similarity matching is used to find the corresponding enterprise name and obtain the third matching rate. The server compares the first, second and third matching rates and selects the enterprise name corresponding to the highest matching rate as the final matching result. If any one query mode matches multiple enterprise names, the matching rates corresponding to the multiple enterprise names are compared to determine the final matching result. For example, if the identified text is "Chinese safety" and the matched enterprise name is "Chinese safety bank", the matching rate is 66.66%; if the matched enterprise name is "Chinese safety Co., Ltd", the matching rate is 50%. The "Chinese safety bank" with the higher matching rate is selected as the final matching result.
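The rate comparison may be sketched as follows; this illustrative Python approximates a matching rate by character coverage (the 66.66%/50% figures in the text correspond to character counts of the original Chinese enterprise names, so the English strings here only reproduce the ordering, not those exact rates):

```python
def matching_rate(identified, candidate):
    """Fraction of the candidate enterprise name covered by the
    identified text -- a simplification of what a SOLR query would
    score, used only to illustrate the ranking."""
    if identified not in candidate:
        return 0.0
    return len(identified) / len(candidate)

def best_match(identified, candidates):
    """Pick the enterprise name with the highest matching rate."""
    return max(candidates, key=lambda c: matching_rate(identified, c))

# The shorter full name covers more of itself, so the bank wins.
winner = best_match("Chinese safety",
                    ["Chinese safety bank", "Chinese safety Co., Ltd"])
```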
The server may also perform the three query modes in sequence according to their priority, stopping as soon as one query mode matches successfully; exact matching has a higher priority than fuzzy matching, which in turn has a higher priority than similarity matching. The server first uses exact matching to match the enterprise name; if it succeeds, the matching stops and fuzzy matching and similarity matching are not used; if it fails, fuzzy matching is used next, and so on.
It should be understood that although the steps in the flowcharts of Figs. 2-3 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict order restriction on the execution of these steps, which may be executed in other orders. Moreover, at least some of the steps in Figs. 2-3 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose execution order is not necessarily sequential: they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 4, a smart city event management device is provided, including: a receiving module 402, an obtaining module 404, a calling module 406, a generating module 408 and a storage module 410, wherein:

the receiving module 402 is configured to receive city video data sent by a terminal and preprocess the city video data to obtain video images;

the obtaining module 404 is configured to obtain a key video image from the video images according to a preset frame number;

the calling module 406 is configured to obtain the video identifier carried by the video data, call the corresponding model according to the video identifier, and input the key video image into the model to obtain a recognition result;

the generating module 408 is configured to generate a city anomalous event according to the recognition result, send it to the terminal, and receive an audit result fed back by the terminal, the audit result being generated by the terminal according to the city anomalous event; and

the storage module 410 is configured to store the city anomalous event and the corresponding video image into a corresponding event base according to the audit result.
In one embodiment, the obtaining module 404 is further configured to select one frame of video image from the video images according to the preset frame number as the target video image, the target video image carrying first location information; if it is recognized that no text exists in the target video image, obtain a comparison video image according to the first location information, the comparison video image being a pre-acquired video image carrying the first location information; compare the target video image with the comparison video image, and if the target video image meets a first preset requirement, determine it to be the key video image; if it is recognized that text exists in the target video image, obtain the preceding and following video images of the target video image; and compare the target video image with the preceding and following video images, and select the one meeting a second preset requirement and determine it to be the key video image.
In one embodiment, the calling module 406 is further configured to: if the video identifier is address verification, call the image recognition model to recognize the key video image and obtain a recognition result including identified text; if the video identifier is people-flow analysis, call the deep neural network model to recognize the key video image and obtain a recognition result including the people-flow quantity and identification score; and if the video identifier is health detection, obtain the key region from the key video image and call the convolutional neural network model to obtain a recognition result including the feature value of the key region.
In one embodiment, the generating module 408 is further configured to: when the recognition result is identified text, obtain the second location information carried by the key video image; obtain the enterprise information according to the second location information, the enterprise information including enterprise names; use the SOLR query to match the identified text against the enterprise information to find the corresponding enterprise name; obtain the registered address corresponding to the enterprise name and obtain the third location information according to the registered address; calculate the difference value between the second location information and the third location information and generate the event according to the difference value; when the recognition result is the people-flow quantity and identification score, if the identification score is greater than or equal to the first preset value, obtain the historical people-flow quantity, determine the people-flow difference according to the historical people-flow quantity and the people-flow quantity, and generate the event according to the people-flow difference; and when the recognition result is the feature value of the key region, if the feature value is greater than or equal to the second preset value, generate the event according to the feature value.
In one embodiment, the calling module 406 is further configured to recognize the key video image to obtain body-part candidate frames; determine the people-flow quantity according to the body-part candidate frames; obtain the preset scores and weights of the body-part candidate frames; and calculate the identification score according to the preset scores and weights.
In one embodiment, the calling module 406 is further configured to input the key region into the first feature extraction network to extract the first feature data; input the first feature data into the second feature extraction network to extract the second feature data; and perform feature decomposition on the second feature data to obtain the feature value.
In one embodiment, the generating module 408 is further configured to use exact matching to match the identified text against the enterprise information to obtain the first matching rate and corresponding enterprise name; use fuzzy matching to match the identified text against the enterprise information to obtain the second matching rate and corresponding enterprise name; use similarity matching to match the identified text against the enterprise information to obtain the third matching rate and corresponding enterprise name; and select the maximum matching rate among the first, second and third matching rates as the matching result and obtain the enterprise name corresponding to the matching result.
For specific limitations of the smart city event management device, reference may be made to the limitations of the smart city event management method above, which are not repeated here. Each module in the above smart city event management device may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in or independent of the processor in a computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided; the computer device may be a server, and its internal structure diagram may be as shown in Fig. 5. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device is used to provide computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store video data. The network interface of the computer device is used to connect and communicate with an external terminal through a network. The computer program, when executed by the processor, implements a smart city event management method.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:

receiving city video data sent by a terminal, and preprocessing the city video data to obtain video images;

obtaining a key video image from the video images according to a preset frame number;

obtaining a video identifier carried by the city video data, calling a corresponding model according to the video identifier, and inputting the key video image into the model to obtain a recognition result;

generating a city anomalous event according to the recognition result and sending it to the terminal, and receiving an audit result fed back by the terminal, the audit result being generated by the terminal according to the city anomalous event; and

storing the city anomalous event and the corresponding video image into a corresponding event base according to the audit result.
In one embodiment, the processor further implements the following steps when executing the computer program:

selecting one frame of video image from the video images according to the preset frame number as a target video image, the target video image carrying first location information;

if it is recognized that no text exists in the target video image, obtaining a comparison video image according to the first location information, the comparison video image being a pre-acquired video image carrying the first location information;

comparing the target video image with the comparison video image, and if the target video image meets a first preset requirement, determining it to be the key video image;

if it is recognized that text exists in the target video image, obtaining the preceding and following video images of the target video image; and

comparing the target video image with the preceding and following video images, and selecting the one meeting a second preset requirement and determining it to be the key video image.
In one embodiment, the processor further implements the following steps when executing the computer program:

if the video identifier is address verification, calling an image recognition model to recognize the key video image and obtain a recognition result, the recognition result including identified text;

if the video identifier is people-flow analysis, calling a deep neural network model to recognize the key video image and obtain a recognition result, the recognition result including a people-flow quantity and an identification score; and

if the video identifier is health detection, obtaining a key region from the key video image, and calling a convolutional neural network model to obtain a recognition result, the recognition result including a feature value of the key region.
In one embodiment, the processor, when executing the computer program, further performs the following steps:
When the recognition result is recognized text, obtaining second location information carried by the key video image;
Obtaining enterprise information according to the second location information, where the enterprise information includes an enterprise name;
Using an SOLR query, matching the recognized text against the enterprise information to obtain the corresponding enterprise name;
Obtaining a registered address corresponding to the enterprise name, and obtaining third location information according to the registered address;
Calculating a difference value between the second location information and the third location information, and generating an event according to the difference value;
When the recognition result is a people flow quantity and a recognition score, if the recognition score is greater than or equal to a first preset value, obtaining a historical people flow quantity;
Determining a people flow difference according to the historical people flow quantity and the people flow quantity, and generating an event according to the people flow difference;
When the recognition result is the feature value of the key region, if the feature value is greater than or equal to a second preset value, generating an event according to the feature value.
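The three event-generation branches can be sketched as below. The threshold defaults, the (x, y) location format, and the Euclidean difference value are illustrative assumptions; the text only requires "a difference value" and two preset values.

```python
def generate_address_event(second_loc, third_loc, tolerance=0.001):
    """Compare the location carried by the key frame with the location
    derived from the enterprise's registered address."""
    dx = second_loc[0] - third_loc[0]
    dy = second_loc[1] - third_loc[1]
    diff = (dx * dx + dy * dy) ** 0.5
    return {"type": "address_mismatch", "difference": diff} if diff > tolerance else None

def generate_people_flow_event(score, people_count, history_count, first_preset=0.8):
    """Only act when the recognition score reaches the first preset value;
    the event records the deviation from the historical count."""
    if score < first_preset:
        return None
    return {"type": "people_flow", "difference": people_count - history_count}

def generate_health_event(feature_value, second_preset=0.5):
    """Generate a health-detection event when the key-region feature value
    reaches the second preset value."""
    if feature_value < second_preset:
        return None
    return {"type": "health", "feature_value": feature_value}
```

Returning `None` when no threshold is met keeps event generation side-effect free; the caller decides what to forward to the terminal for audit.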
In one embodiment, the processor, when executing the computer program, further performs the following steps:
Recognizing the key video image to obtain body-part candidate boxes;
Determining the people flow quantity according to the body-part candidate boxes;
Obtaining preset scores and weights of the body-part candidate boxes, and calculating the recognition score according to the preset scores and weights.
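A minimal sketch of deriving the two outputs from candidate boxes. Treating each box as one person and combining the preset scores with a weighted average are simplifying assumptions; the text does not specify the calculation.

```python
def count_and_score(candidate_boxes):
    """candidate_boxes: list of {"score": preset score, "weight": weight}."""
    # People flow quantity: one person per body-part candidate box (assumed).
    people_flow_quantity = len(candidate_boxes)
    total_weight = sum(box["weight"] for box in candidate_boxes)
    if total_weight == 0:
        return people_flow_quantity, 0.0
    # Recognition score: weighted average of the boxes' preset scores (assumed).
    recognition_score = sum(box["score"] * box["weight"]
                            for box in candidate_boxes) / total_weight
    return people_flow_quantity, recognition_score
```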
In one embodiment, the processor, when executing the computer program, further performs the following steps:
Inputting the key region into a first feature extraction network to extract first feature data;
Inputting the first feature data into a second feature extraction network to extract second feature data;
Performing feature decomposition on the second feature data to obtain the feature value.
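The two-stage extraction plus decomposition can be illustrated with stand-ins: fixed linear maps replace the (unspecified) feature networks, and a Euclidean norm replaces the (unspecified) feature decomposition. Everything here is an assumption made for the sketch.

```python
def apply_linear(weights, vector):
    """Multiply a weight matrix (list of rows) by a feature vector."""
    return [sum(w * v for w, v in zip(row, vector)) for row in weights]

def extract_feature_value(key_region):
    # First feature extraction network (stand-in: fixed 2x2 linear map).
    first = apply_linear([[1.0, 0.0], [0.0, 2.0]], key_region)
    # Second feature extraction network (stand-in: fixed 1x2 linear map).
    second = apply_linear([[0.5, 0.5]], first)
    # Feature decomposition (stand-in: Euclidean norm of the result).
    return sum(x * x for x in second) ** 0.5
```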
In one embodiment, the processor, when executing the computer program, further performs the following steps:
Matching the recognized text against the enterprise information using exact matching to obtain a first matching rate and a corresponding enterprise name;
Matching the recognized text against the enterprise information using fuzzy matching to obtain a second matching rate and a corresponding enterprise name;
Matching the recognized text against the enterprise information using similarity matching to obtain a third matching rate and a corresponding enterprise name;
Selecting the maximum matching rate among the first matching rate, the second matching rate, and the third matching rate as the matching result, and obtaining the enterprise name corresponding to the matching result.
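The three-way match-and-select step can be approximated locally as follows. In a real deployment these would be Solr query modes; here exact equality, substring containment ("fuzzy"), and `difflib` ratio ("similarity") are illustrative stand-ins.

```python
import difflib

def best_enterprise_name(text, names):
    """Run all three matchers and keep the name with the highest rate."""
    exact = max(((1.0 if text == n else 0.0, n) for n in names),
                key=lambda p: p[0])
    fuzzy = max(((1.0 if text in n else 0.0, n) for n in names),
                key=lambda p: p[0])
    similar = max(((difflib.SequenceMatcher(None, text, n).ratio(), n)
                   for n in names), key=lambda p: p[0])
    # Select the maximum matching rate as the matching result.
    rate, name = max((exact, fuzzy, similar), key=lambda p: p[0])
    return name, rate
```

Running all three strategies and taking the maximum rate makes the lookup robust to partial OCR output: an exact hit wins outright, while a truncated shop-sign reading still resolves via containment or similarity.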
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program performs the following steps:
Receiving city video data sent by a terminal, and preprocessing the city video data to obtain video images;
Obtaining a key video image from the video images according to a preset frame number;
Obtaining a video identifier carried by the city video data, calling a corresponding model according to the video identifier, and inputting the key video image into the model to obtain a recognition result;
Generating a city anomalous event according to the recognition result and sending it to the terminal, and receiving an audit result fed back by the terminal, where the audit result is generated by the terminal according to the city anomalous event;
Storing the city anomalous event and the corresponding video image into a corresponding event base according to the audit result.
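The end-to-end flow above can be sketched with every stage injected as a callable, so the control flow matches the steps without committing to any particular model, terminal, or storage backend (all names below are assumptions).

```python
def handle_city_video(video_data, preprocess, select_key_frame, models,
                      audit, store):
    """End-to-end sketch of the claimed pipeline."""
    frames = preprocess(video_data["payload"])           # pretreatment
    key_frame = select_key_frame(frames, video_data["preset_frame_number"])
    model = models[video_data["video_identifier"]]       # pick model by identifier
    result = model(key_frame)
    event = {"result": result, "identifier": video_data["video_identifier"]}
    audit_result = audit(event)                          # terminal reviews the event
    store(event, key_frame, audit_result)                # route to the event base
    return event, audit_result
```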
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
Selecting a frame of video image from the video images according to the preset frame number as a target video image, where the target video image carries first location information;
If it is recognized that the target video image contains no text, obtaining a comparison video image according to the first location information, where the comparison video image is a previously collected video image carrying the first location information;
Comparing the target video image with the comparison video image, and if the target video image meets a first preset requirement, determining it as the key video image;
If it is recognized that the target video image contains text, obtaining the preceding and following video images of the target video image;
Comparing the target video image with the preceding and following video images, and selecting the image meeting a second preset requirement as the key video image.
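The two selection branches can be sketched as below. The "preset requirements" are never defined in the text, so a generic per-frame quality function and numeric thresholds stand in for them; all of that is assumed.

```python
def select_key_frame(frames, idx, has_text, quality, baseline,
                     first_req=0.5, second_req=0.5):
    """Pick the key video image per the text/no-text branch above."""
    target = frames[idx]
    if not has_text(target):
        # No text: the target qualifies if it is sufficiently better than
        # the previously collected frame from the same location.
        return target if quality(target) - quality(baseline) >= first_req else None
    # Text present: look at the target plus its preceding and following
    # frames and keep the best one, provided it meets the second requirement.
    candidates = frames[max(idx - 1, 0): idx + 2]
    best = max(candidates, key=quality)
    return best if quality(best) >= second_req else None
```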
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
If the video identifier is address verification, calling an image recognition model to recognize the key video image and obtain a recognition result, where the recognition result includes recognized text;
If the video identifier is people flow analysis, calling a deep neural network model to recognize the key video image and obtain a recognition result, where the recognition result includes a people flow quantity and a recognition score;
If the video identifier is health detection, obtaining a key region from the key video image, and calling a convolutional neural network model to obtain a recognition result, where the recognition result includes a feature value of the key region.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
When the recognition result is recognized text, obtaining second location information carried by the key video image;
Obtaining enterprise information according to the second location information, where the enterprise information includes an enterprise name;
Using an SOLR query, matching the recognized text against the enterprise information to obtain the corresponding enterprise name;
Obtaining a registered address corresponding to the enterprise name, and obtaining third location information according to the registered address;
Calculating a difference value between the second location information and the third location information, and generating an event according to the difference value;
When the recognition result is a people flow quantity and a recognition score, if the recognition score is greater than or equal to a first preset value, obtaining a historical people flow quantity;
Determining a people flow difference according to the historical people flow quantity and the people flow quantity, and generating an event according to the people flow difference;
When the recognition result is the feature value of the key region, if the feature value is greater than or equal to a second preset value, generating an event according to the feature value.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
Recognizing the key video image to obtain body-part candidate boxes;
Determining the people flow quantity according to the body-part candidate boxes;
Obtaining preset scores and weights of the body-part candidate boxes, and calculating the recognition score according to the preset scores and weights.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
Inputting the key region into a first feature extraction network to extract first feature data;
Inputting the first feature data into a second feature extraction network to extract second feature data;
Performing feature decomposition on the second feature data to obtain the feature value.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
Matching the recognized text against the enterprise information using exact matching to obtain a first matching rate and a corresponding enterprise name;
Matching the recognized text against the enterprise information using fuzzy matching to obtain a second matching rate and a corresponding enterprise name;
Matching the recognized text against the enterprise information using similarity matching to obtain a third matching rate and a corresponding enterprise name;
Selecting the maximum matching rate among the first matching rate, the second matching rate, and the third matching rate as the matching result, and obtaining the enterprise name corresponding to the matching result.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above method embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments have been described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A smart city event management method, the method comprising:
receiving city video data sent by a terminal, and preprocessing the city video data to obtain video images;
obtaining a key video image from the video images according to a preset frame number;
obtaining a video identifier carried by the city video data, calling a corresponding model according to the video identifier, and inputting the key video image into the model to obtain a recognition result;
generating a city anomalous event according to the recognition result and sending it to the terminal, and receiving an audit result fed back by the terminal, wherein the audit result is generated by the terminal according to the city anomalous event;
storing the city anomalous event and the corresponding video image into a corresponding event base according to the audit result.
2. The method according to claim 1, wherein obtaining the key video image from the video images according to the preset frame number comprises:
selecting a frame of video image from the video images according to the preset frame number as a target video image, the target video image carrying first location information;
if it is recognized that the target video image contains no text, obtaining a comparison video image according to the first location information, the comparison video image being a previously collected video image carrying the first location information;
comparing the target video image with the comparison video image, and if the target video image meets a first preset requirement, determining it as the key video image;
if it is recognized that the target video image contains text, obtaining the preceding and following video images of the target video image;
comparing the target video image with the preceding and following video images, and selecting the image meeting a second preset requirement as the key video image.
3. The method according to claim 1, wherein the video identifier includes address verification, people flow analysis, and health detection; and
calling the corresponding model according to the video identifier and inputting the key video image into the model to obtain the recognition result comprises:
if the video identifier is address verification, calling an image recognition model to recognize the key video image and obtain a recognition result, the recognition result including recognized text;
if the video identifier is people flow analysis, calling a deep neural network model to recognize the key video image and obtain a recognition result, the recognition result including a people flow quantity and a recognition score;
if the video identifier is health detection, obtaining a key region from the key video image, and calling a convolutional neural network model to obtain a recognition result, the recognition result including a feature value of the key region.
4. The method according to claim 3, wherein generating the city anomalous event according to the recognition result comprises:
when the recognition result is recognized text, obtaining second location information carried by the key video image;
obtaining enterprise information according to the second location information, the enterprise information including an enterprise name;
using an SOLR query, matching the recognized text against the enterprise information to obtain the corresponding enterprise name;
obtaining a registered address corresponding to the enterprise name, and obtaining third location information according to the registered address;
calculating a difference value between the second location information and the third location information, and generating an event according to the difference value;
when the recognition result is the people flow quantity and the recognition score, if the recognition score is greater than or equal to a first preset value, obtaining a historical people flow quantity;
determining a people flow difference according to the historical people flow quantity and the people flow quantity, and generating an event according to the people flow difference;
when the recognition result is the feature value of the key region, if the feature value is greater than or equal to a second preset value, generating an event according to the feature value.
5. The method according to claim 3, wherein calling the deep neural network model to recognize the key video image and obtain the recognition result, the recognition result including the people flow quantity and the recognition score, comprises:
recognizing the key video image to obtain body-part candidate boxes;
determining the people flow quantity according to the body-part candidate boxes;
obtaining preset scores and weights of the body-part candidate boxes;
calculating the recognition score according to the preset scores and weights.
6. The method according to claim 3, wherein calling the convolutional neural network model to obtain the recognition result, the recognition result including the feature value of the key region, comprises:
inputting the key region into a first feature extraction network to extract first feature data;
inputting the first feature data into a second feature extraction network to extract second feature data;
performing feature decomposition on the second feature data to obtain the feature value.
7. The method according to claim 4, wherein the SOLR query includes exact matching, fuzzy matching, and similarity matching; and using the SOLR query to match the recognized text against the enterprise information to obtain the corresponding enterprise name comprises:
matching the recognized text against the enterprise information using exact matching to obtain a first matching rate and a corresponding enterprise name;
matching the recognized text against the enterprise information using fuzzy matching to obtain a second matching rate and a corresponding enterprise name;
matching the recognized text against the enterprise information using similarity matching to obtain a third matching rate and a corresponding enterprise name;
selecting the maximum matching rate among the first matching rate, the second matching rate, and the third matching rate as the matching result, and obtaining the enterprise name corresponding to the matching result.
8. A smart city event management device, wherein the device comprises:
a receiving module, configured to receive city video data sent by a terminal and preprocess the city video data to obtain video images;
an obtaining module, configured to obtain a key video image from the video images according to a preset frame number;
a calling module, configured to obtain a video identifier carried by the city video data, call a corresponding model according to the video identifier, and input the key video image into the model to obtain a recognition result;
a generation module, configured to generate a city anomalous event according to the recognition result, send it to the terminal, and receive an audit result fed back by the terminal, the audit result being generated by the terminal according to the city anomalous event;
a storage module, configured to store the city anomalous event and the corresponding video image into a corresponding event base according to the audit result.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910004451.5A CN109815852A (en) | 2019-01-03 | 2019-01-03 | Smart city event management method, device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910004451.5A CN109815852A (en) | 2019-01-03 | 2019-01-03 | Smart city event management method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109815852A true CN109815852A (en) | 2019-05-28 |
Family
ID=66603925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910004451.5A Pending CN109815852A (en) | 2019-01-03 | 2019-01-03 | Smart city event management method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109815852A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111401987A (en) * | 2020-02-19 | 2020-07-10 | 北京三快在线科技有限公司 | Catering merchant information management and display method, system, server and storage medium |
CN112580470A (en) * | 2020-12-11 | 2021-03-30 | 北京软通智慧城市科技有限公司 | City visual perception method and device, electronic equipment and storage medium |
CN113139434A (en) * | 2021-03-29 | 2021-07-20 | 北京旷视科技有限公司 | City management event processing method and device, electronic equipment and readable storage medium |
CN113205037A (en) * | 2021-04-28 | 2021-08-03 | 北京百度网讯科技有限公司 | Event detection method and device, electronic equipment and readable storage medium |
CN114004720A (en) * | 2021-10-27 | 2022-02-01 | 软通智慧信息技术有限公司 | Checking method, device, server, system and storage medium |
CN114187156A (en) * | 2021-12-17 | 2022-03-15 | 江西洪都航空工业集团有限责任公司 | Intelligent recognition method for city management affair component under mobile background |
CN114241399A (en) * | 2022-02-25 | 2022-03-25 | 中电科新型智慧城市研究院有限公司 | Event handling method, system, device and storage medium |
CN114723593A (en) * | 2022-03-03 | 2022-07-08 | 车伟红 | Smart city management system based on big data and cloud computing |
CN114898409A (en) * | 2022-07-14 | 2022-08-12 | 深圳市海清视讯科技有限公司 | Data processing method and device |
CN115186881A (en) * | 2022-06-27 | 2022-10-14 | 红豆电信有限公司 | City safety prediction management method and system based on big data |
CN117726195A (en) * | 2024-02-07 | 2024-03-19 | 创意信息技术股份有限公司 | City management event quantity change prediction method, device, equipment and storage medium |
CN118504843A (en) * | 2024-07-17 | 2024-08-16 | 深圳市华傲数据技术有限公司 | Government affair data processing method and device, storage medium and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106210615A (en) * | 2015-04-30 | 2016-12-07 | 北京文安智能技术股份有限公司 | A kind of city management automatic monitoring method, Apparatus and system |
CN107169426A (en) * | 2017-04-27 | 2017-09-15 | 广东工业大学 | A kind of detection of crowd's abnormal feeling and localization method based on deep neural network |
CN107480587A (en) * | 2017-07-06 | 2017-12-15 | 阿里巴巴集团控股有限公司 | A kind of method and device of model configuration and image recognition |
CN108683826A (en) * | 2018-05-15 | 2018-10-19 | 腾讯科技(深圳)有限公司 | Video data handling procedure, device, computer equipment and storage medium |
CN108921130A (en) * | 2018-07-26 | 2018-11-30 | 聊城大学 | Video key frame extracting method based on salient region |
CN109063612A (en) * | 2018-07-19 | 2018-12-21 | 中智城信息技术有限公司 | City intelligent red line management method and machine readable storage medium |
- 2019-01-03 CN CN201910004451.5A patent/CN109815852A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106210615A (en) * | 2015-04-30 | 2016-12-07 | 北京文安智能技术股份有限公司 | A kind of city management automatic monitoring method, Apparatus and system |
CN107169426A (en) * | 2017-04-27 | 2017-09-15 | 广东工业大学 | A kind of detection of crowd's abnormal feeling and localization method based on deep neural network |
CN107480587A (en) * | 2017-07-06 | 2017-12-15 | 阿里巴巴集团控股有限公司 | A kind of method and device of model configuration and image recognition |
CN108683826A (en) * | 2018-05-15 | 2018-10-19 | 腾讯科技(深圳)有限公司 | Video data handling procedure, device, computer equipment and storage medium |
CN109063612A (en) * | 2018-07-19 | 2018-12-21 | 中智城信息技术有限公司 | City intelligent red line management method and machine readable storage medium |
CN108921130A (en) * | 2018-07-26 | 2018-11-30 | 聊城大学 | Video key frame extracting method based on salient region |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111401987A (en) * | 2020-02-19 | 2020-07-10 | 北京三快在线科技有限公司 | Catering merchant information management and display method, system, server and storage medium |
CN112580470A (en) * | 2020-12-11 | 2021-03-30 | 北京软通智慧城市科技有限公司 | City visual perception method and device, electronic equipment and storage medium |
CN113139434A (en) * | 2021-03-29 | 2021-07-20 | 北京旷视科技有限公司 | City management event processing method and device, electronic equipment and readable storage medium |
CN113205037B (en) * | 2021-04-28 | 2024-01-26 | 北京百度网讯科技有限公司 | Event detection method, event detection device, electronic equipment and readable storage medium |
CN113205037A (en) * | 2021-04-28 | 2021-08-03 | 北京百度网讯科技有限公司 | Event detection method and device, electronic equipment and readable storage medium |
CN114004720A (en) * | 2021-10-27 | 2022-02-01 | 软通智慧信息技术有限公司 | Checking method, device, server, system and storage medium |
CN114187156A (en) * | 2021-12-17 | 2022-03-15 | 江西洪都航空工业集团有限责任公司 | Intelligent recognition method for city management affair component under mobile background |
CN114241399A (en) * | 2022-02-25 | 2022-03-25 | 中电科新型智慧城市研究院有限公司 | Event handling method, system, device and storage medium |
CN114723593A (en) * | 2022-03-03 | 2022-07-08 | 车伟红 | Smart city management system based on big data and cloud computing |
CN115186881A (en) * | 2022-06-27 | 2022-10-14 | 红豆电信有限公司 | City safety prediction management method and system based on big data |
CN114898409A (en) * | 2022-07-14 | 2022-08-12 | 深圳市海清视讯科技有限公司 | Data processing method and device |
CN114898409B (en) * | 2022-07-14 | 2022-09-30 | 深圳市海清视讯科技有限公司 | Data processing method and device |
CN117726195A (en) * | 2024-02-07 | 2024-03-19 | 创意信息技术股份有限公司 | City management event quantity change prediction method, device, equipment and storage medium |
CN117726195B (en) * | 2024-02-07 | 2024-05-07 | 创意信息技术股份有限公司 | City management event quantity change prediction method, device, equipment and storage medium |
CN118504843A (en) * | 2024-07-17 | 2024-08-16 | 深圳市华傲数据技术有限公司 | Government affair data processing method and device, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109815852A (en) | Smart city event management method, device, computer equipment and storage medium | |
CN110659395B (en) | Method, device, computer equipment and storage medium for constructing relational network map | |
CN110472870B (en) | Cashier desk service specification detection system based on artificial intelligence | |
CN111814977B (en) | Method and device for training event prediction model | |
Stec et al. | Forecasting crime with deep learning | |
US11875569B2 (en) | Smart video surveillance system using a neural network engine | |
Li et al. | Long-short term spatiotemporal tensor prediction for passenger flow profile | |
KR20200098875A (en) | System and method for providing 3D face recognition | |
CN112669342B (en) | Training method and device of image segmentation network, and image segmentation method and device | |
CN105069130A (en) | Suspect object prediction method | |
CN109815851A (en) | Kitchen hygiene detection method, device, computer equipment and storage medium | |
US11631165B2 (en) | Repair estimation based on images | |
CN109829072A (en) | Construct atlas calculation and relevant apparatus | |
CN114723843B (en) | Method, device, equipment and storage medium for generating virtual clothing through multi-mode fusion | |
CN110135943A (en) | Products Show method, apparatus, computer equipment and storage medium | |
JP2021520015A (en) | Image processing methods, devices, terminal equipment, servers and systems | |
CN115601710A (en) | Examination room abnormal behavior monitoring method and system based on self-attention network architecture | |
CN110866096A (en) | Intelligent answer control method and device, computer equipment and storage medium | |
CN111400415B (en) | Personnel management method and related device | |
CN116756576A (en) | Data processing method, model training method, electronic device and storage medium | |
CN111652152A (en) | Crowd density detection method and device, computer equipment and storage medium | |
CN112333182B (en) | File processing method, device, server and storage medium | |
CN113298112B (en) | Integrated data intelligent labeling method and system | |
Sarisaray-Boluk et al. | Performance comparison of data reduction techniques for wireless multimedia sensor network applications | |
CN115205738A (en) | Emergency drainage method and system applied to urban inland inundation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||