CN110472458A - Unmanned store order management method and system - Google Patents

Unmanned store order management method and system Download PDF

Info

Publication number
CN110472458A
CN110472458A (application CN201810448311.2A)
Authority
CN
China
Prior art keywords
shop
posture
personnel
human body
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810448311.2A
Other languages
Chinese (zh)
Inventor
王齐
刘高原
林志豪
刘开展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deep Eye Technology (shenzhen) Co Ltd
Original Assignee
Deep Eye Technology (shenzhen) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Eye Technology (shenzhen) Co Ltd filed Critical Deep Eye Technology (shenzhen) Co Ltd
Priority to CN201810448311.2A priority Critical patent/CN110472458A/en
Publication of CN110472458A publication Critical patent/CN110472458A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items, of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an unmanned store order management method and system. The method includes the following steps: acquiring video images of the store interior; performing human action recognition on the video images to obtain the postures of people in the store; analyzing the postures of people in the store to judge the type of abnormal behaviour; issuing a corresponding reminder instruction according to the abnormal behaviour type; and delivering a reminder message to the person in the store according to the reminder instruction. The invention also provides an unmanned store order management system, comprising an image acquisition module, a computing module, a server and an interaction module. The unmanned store order management method of the present invention automatically judges from the video images whether the behaviour of a person in the store is abnormal and automatically issues different reminders for different abnormal behaviours, so that the safety and order of the store remain under control without on-site management staff or manual remote monitoring.

Description

Unmanned store order management method and system
Technical field
The present invention relates to the field of artificial intelligence, and more particularly to an unmanned store order management method and system.
Background art
With the sharp rise in labour costs, unmanned convenience stores, which do away with the fixed staffed retail model, have come into vogue. Unmanned convenience stores offer a completely new, fast and pressure-free shopping experience, but they also bring entirely different management requirements. In an unmanned environment, consumer shopping behaviour changes considerably, and store management must be adjusted accordingly. Because an unmanned storefront normally operates without staff, most of the time consumers cannot consult a shop assistant directly when they encounter problems. Everyday shopping questions can be resolved through low-urgency interactions such as text or voice over messaging channels, but possible emergencies require a purpose-built design. If a customer encounters a sudden safety or order problem inside the store, the consequences can be very serious without a reasonable management mechanism, because unmanned convenience stores generally use strict entry and exit control and have rather cramped floor space. In addition, promoting civilised shopping behaviour in an unmanned storefront is also a direction of unmanned-store safety management.
The rise of unattended stores brings a new mode of interaction between people and stores. How to guarantee the safety and controllability of a store without on-site management staff is a problem to be solved, and there is as yet no system or device in the industry addressing this problem.
Summary of the invention
In view of this, the object of the present invention is to provide an unmanned store order management method that keeps the safety and order of a store under control without on-site management staff.
The present invention achieves this by the following method:
An unmanned store order management method includes the following steps:
acquiring video images of the store interior;
performing human action recognition on the video images to obtain the postures of people in the store;
analyzing the postures of people in the store to judge the type of abnormal behaviour;
issuing a corresponding reminder instruction according to the abnormal behaviour type;
delivering a reminder message to the person in the store according to the reminder instruction.
The unmanned store order management method of the present invention automatically judges from the video images whether the behaviour of a person in the store is abnormal and automatically issues different reminders for different abnormal behaviours, so that the safety and order of the store remain under control without on-site management staff or manual remote monitoring.
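The five-step method above can be sketched, purely for illustration, as a small dispatch function; the posture labels, behaviour types and reminder texts below are hypothetical stand-ins, not names taken from the patent.

```python
# Hypothetical sketch of the claimed pipeline: posture -> abnormal-behaviour
# type -> reminder instruction. All labels and messages are invented examples.

def detect_abnormal_behaviour(posture):
    """Analyze a recognised posture and return its abnormal-behaviour type."""
    order_postures = {"speed_walking", "running", "punching", "kicking"}
    safety_postures = {"squatting_long", "lying_down"}
    if posture in order_postures:
        return "order"        # disturbs store order
    if posture in safety_postures:
        return "safety"       # person may be in danger
    return None               # nothing abnormal

# Behaviour type -> reminder instruction (the interaction module would
# convert this into a sound or light signal).
REMINDERS = {
    "order": "Please do not run inside the store.",
    "safety": "Are you all right? Staff have been notified.",
}

def manage(posture):
    """Chain the steps: posture in, reminder message out (or None)."""
    kind = detect_abnormal_behaviour(posture)
    return REMINDERS.get(kind) if kind else None
```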
Further, performing human action recognition on the video images to obtain the posture of a person in the store comprises: inputting the video images into a posture-labelling convolutional neural network to obtain the node set of the person's posture; and matching the node set against a posture node association library to obtain the person's posture.
Further, performing human action recognition on the video images to obtain the posture of a person in the store comprises: inputting the video images into a human-detection convolutional neural network, which marks the human position with a bounding box in the image; inputting the video images into an optical-flow convolutional neural network, which computes the optical-flow information of the whole frame; computing the optical-flow value of the human body from the bounding box and the whole-frame information; and comparing the human optical-flow value with a set threshold to judge whether the person is speed-walking or running.
Further, performing human action recognition on the video images to obtain the posture of a person in the store comprises: inputting the video images into a human-detection convolutional neural network, which marks the human position with a bounding box in the image; inputting the video images into an optical-flow convolutional neural network, which computes the optical-flow information of the whole frame; computing the optical-flow contour of the human body from the bounding box and the whole-frame information; and analyzing the human optical-flow contour to judge whether the person is squatting or lying down.
Further, analyzing the human optical-flow contour to judge whether the person is squatting or lying down comprises: applying image dilation to the person's optical-flow contour to form a complete contour map; high-pass filtering the contour map to obtain texture details; extracting contours from the contour map and taking the one enclosing the largest area; and comparing the length-to-width ratio of the largest contour with set thresholds to judge whether the person is squatting or lying down.
Further, the method also includes the following steps: using the optical-flow contours as matching content, performing feature matching between candidate human bounding boxes and the human trajectories to be updated; finding the best match between candidate boxes and trajectories and updating each trajectory with the matched box information; obtaining the positional relationships between the different camera regions of the store video so that human trajectories switch automatically between camera regions; combining the trajectories from the different camera regions to form the complete trajectory of each person; plotting the complete trajectories as curves on a two-dimensional top-down plan of the store and plotting the current position of each trajectory as a coordinate point on the plan; and counting the trajectory coordinate points in each region of the store and comparing the count for each region with a set threshold to judge whether that region is congested.
Further, the method also includes the following steps: obtaining the environment detection parameters in the store; and automatically controlling the corresponding appliances according to those parameters.
Further, the method also includes the following step: if an environment detection parameter in the store exceeds a set threshold, sounding an alarm to the people in the store and directing them to leave.
The present invention also provides an unmanned store management system, comprising an image acquisition module, a computing module, a server and an interaction module. The computing module is connected to the image acquisition module and to the server, and the server is connected to the interaction module. The computing module includes a deep-learning network module and an image processing module, and the server includes an order management module;
the image acquisition module is used to obtain video images of the store interior;
the deep-learning network module is used to perform human action recognition on the video images and obtain the postures of people in the store;
the image processing module is used to analyze the postures of people in the store and judge the type of abnormal behaviour;
the order management module is used to issue a corresponding reminder instruction according to the abnormal behaviour type;
the interaction module is used to deliver a reminder message to the person in the store according to the reminder instruction.
Further, the deep-learning network module includes a posture-labelling convolutional neural network, which analyzes the video images, obtains the node set of each person's posture, and matches the node set against a posture node association library to obtain the person's posture.
Further, the deep-learning network module includes a human-detection convolutional neural network and an optical-flow convolutional neural network;
the human-detection convolutional neural network analyzes the video images and marks the human positions with bounding boxes in the images;
the optical-flow convolutional neural network analyzes the video images, computes the optical-flow information of the whole frame, and, using the human bounding boxes and the whole-frame information, computes the optical-flow value and optical-flow contour of each person;
the image processing module also analyzes the optical-flow value and contour to judge whether a person is speed-walking or running, or squatting or lying down.
Further, the system also includes an environment parameter detection module connected to the server;
the environment parameter detection module obtains the environment detection parameters in the store and sends them to the server;
the server also compares the environment detection parameter values with set thresholds and, if a threshold is exceeded, automatically adjusts the corresponding appliances.
For better understanding and implementation, the invention is described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a flow chart of an unmanned store order management method in one embodiment;
Fig. 2 is a flow chart of one form of step 30 of the embodiment of Fig. 1;
Fig. 3 is a flow chart of another form of step 30 of the embodiment of Fig. 1;
Fig. 4 is a flow chart of another form of step 30 of the embodiment of Fig. 1;
Fig. 5 is a flow chart of step 334 of the embodiment of Fig. 4;
Fig. 6 is a flow chart of in-store congestion state determination;
Fig. 7 is a schematic diagram of position sharing between camera regions;
Fig. 8 is a flow chart of environment parameter control;
Fig. 9 is a structural block diagram of an unmanned store management system;
Fig. 10 is a structural block diagram of one form of the computing module of Fig. 9;
Fig. 11 is a structural block diagram of another form of the computing module of Fig. 9;
Fig. 12 is a structural block diagram of an unmanned store management system of one embodiment.
Detailed description of the embodiments
Referring to Fig. 1, a flow chart of an unmanned store order management method in one embodiment of the present invention. The management method is suitable for all kinds of unmanned stores with video monitoring, especially unmanned convenience stores.
In step 10, video images of the store interior are obtained.
A video image here means a sequence of continuous still images, which can be the real-time footage acquired from the store's surveillance video.
In step 20, human action recognition is performed on the video images to obtain the postures of people in the store.
Human action refers to the activity or motion of the body, generally covering everyday processes such as walking, running, swinging the arms, squatting, sitting and jumping. The purpose of human action recognition is, on the basis of successful motion tracking and feature extraction, to automatically identify the type of action from the human action characteristic parameters obtained by analysis; it can be realized through techniques such as image processing, pattern recognition, machine learning and computer vision. People in the store mainly means customers who enter to shop, but may also include staff or anyone else who enters. The postures obtained by human action recognition include movement forms such as speed-walking and running; they may also include unconventional movements such as punching, kicking, throwing, pushing and pulling; and they may include movements that bear on personal safety, such as squatting or lying down for a long time.
In step 30, the postures of people in the store are analyzed to judge the type of abnormal behaviour.
If the recognised posture is a movement form such as speed-walking or running, or an unconventional movement such as punching, kicking, throwing, pushing or pulling, the behaviour can be judged an order-type abnormal behaviour; if the person is recognised as squatting or lying down for a long time, the behaviour can be judged a safety-type abnormal behaviour.
In step 40, a corresponding reminder instruction is issued according to the abnormal behaviour type.
A reminder instruction can be a string of electronic commands or a voice instruction sent by a computer. For example, if the abnormal behaviour is judged an order-type behaviour, an electronic or voice command requiring the person to observe store order can be issued; if it is judged a safety-type behaviour, an electronic or voice command inquiring about the person's condition can be issued.
In step 50, a reminder message is delivered to the person according to the reminder instruction.
The reminder message can be an audible alert, a flashing alert, or a combined sound-and-light alert. The electronic or voice instruction is converted into a sound or light signal and delivered to the person in the store, for example the spoken message "Please do not run inside the store."
The unmanned store order management method of the present invention automatically judges from the video images whether the behaviour of a person in the store is abnormal and automatically issues different reminders for different abnormal behaviours, so that the safety and order of the store remain under control without on-site management staff or manual remote monitoring.
In one embodiment, as shown in Fig. 2, step 30 specifically includes the following steps:
In step 311, the video images are input into the posture-labelling convolutional neural network to obtain the node set of the person's posture.
A convolutional neural network can take raw images as input directly, avoiding complex pre-processing; with its deep-learning ability it can be trained to recognise the specified postures of people in the store, and the postures of the individual body parts of one person together form the node set of a posture.
In step 312, the node set is matched against the posture node association library to obtain the person's posture.
Various posture forms can be preset in the posture node association library, such as the unconventional movements of punching, kicking, throwing, pushing and pulling. The node set is matched against the library, and if the match belongs to punching, kicking, throwing, pushing, pulling or the like, it can be concluded that the person is in a corresponding fighting, throwing or tussling posture.
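A minimal sketch of the matching in step 312, assuming a posture node set is a mapping from joint names to 2-D coordinates and that matching picks the library posture with the smallest average joint distance; the library contents, joint names and coordinates are invented for illustration, not taken from the patent.

```python
import math

# Invented two-entry posture library; a real library would hold many postures.
POSE_LIBRARY = {
    "punching": {"wrist": (1.0, 0.5), "elbow": (0.6, 0.5), "shoulder": (0.2, 0.5)},
    "standing": {"wrist": (0.2, 0.1), "elbow": (0.2, 0.4), "shoulder": (0.2, 0.7)},
}

def match_pose(nodes, library=POSE_LIBRARY):
    """Return the library posture whose joints lie closest on average
    to the detected node set."""
    def mean_dist(candidate):
        shared = nodes.keys() & candidate.keys()
        return sum(math.dist(nodes[j], candidate[j]) for j in shared) / len(shared)
    return min(library, key=lambda name: mean_dist(library[name]))
```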
In one embodiment, as shown in Fig. 3, step 30 specifically includes the following steps:
In step 321, the video images are input into the human-detection convolutional neural network, which marks the human positions with bounding boxes in the images.
In step 322, the video images are input into the optical-flow convolutional neural network, which computes the optical-flow information of the whole frame.
In step 323, the optical-flow value of the human body is computed from the human bounding box and the whole-frame flow information.
In step 324, the human optical-flow value is compared with set thresholds to judge whether the person is speed-walking or running.
Here the human-detection convolutional neural network accurately identifies the position of the human body and marks it with a bounding box, while the optical-flow convolutional neural network computes all the optical-flow values in the video image. Optical flow is a method of computing the motion of objects between adjacent frames: it uses the temporal changes of pixels in the image sequence and the correlation between consecutive frames to find the correspondence between the previous frame and the current one. Comparing the optical flow of two adjacent frames within the detected human bounding box yields the person's optical-flow value, which reflects the person's current motion in the store. Comparing this value with set thresholds then gives the judgement: above a certain threshold the person is speed-walking, and above a higher threshold the person is running.
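The two-threshold comparison of step 324 can be sketched as follows; the flow values are in arbitrary pixels-per-frame units and both thresholds are assumed numbers, since the patent does not give concrete values.

```python
# Assumed thresholds (pixels/frame); the patent only speaks of "a certain
# threshold" and "a higher threshold".
WALK_FAST_THRESHOLD = 4.0
RUN_THRESHOLD = 8.0

def classify_speed(flow_magnitudes):
    """Average the optical-flow magnitudes inside the human bounding box
    and compare the mean against the two thresholds."""
    mean_flow = sum(flow_magnitudes) / len(flow_magnitudes)
    if mean_flow > RUN_THRESHOLD:
        return "running"
    if mean_flow > WALK_FAST_THRESHOLD:
        return "speed_walking"
    return "normal"
```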
In one embodiment, as shown in Fig. 4, step 30 specifically includes the following steps:
In step 331, the video images are input into the human-detection convolutional neural network, which marks the human positions with bounding boxes in the images.
In step 332, the video images are input into the optical-flow convolutional neural network, which computes the optical-flow information of the whole frame.
In step 333, the optical-flow contour of the human body is computed from the human bounding box and the whole-frame flow information.
In step 334, the human optical-flow contour is analyzed to judge whether the person is squatting or lying down.
Optical-flow contour detection means extracting the outline of the target from a digital image containing both target and background, ignoring the background and the influence of the target's internal texture and noise. Using the human bounding box and the whole-frame optical-flow information, the approximate optical-flow contour of the person can be extracted. As shown in Fig. 5, analyzing the person's optical-flow contour to judge whether the person is squatting or lying down mainly includes the following steps:
In step 3341, image dilation is applied to the person's optical-flow contour to form a complete contour map.
Because different parts of the body move in different ways, the raw contour may suffer image defects such as discontinuities or hollows, so image dilation is applied first to join the contour into one connected piece.
In step 3342, high-pass filtering is applied to the contour map to obtain texture details.
In step 3343, contours are extracted from the contour map and the one enclosing the largest area is taken.
In step 3344, the length-to-width ratio of the largest contour is compared with set thresholds to judge whether the person is squatting or lying down.
For example, when the length-to-width ratio of the contour falls in the band of a standing person (greater than 1:5), the person is judged to be standing; when it falls in the band of a squatting person (between 1:2 and 1:5), the person is judged to be squatting; when it falls in the band of a lying person (greater than 5:1), the person is judged to be lying down. If a person is judged to have been in the squatting state for a certain time, or to be in the lying state, the behaviour is judged a safety-type abnormal behaviour.
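Steps 3341 to 3344 reduce, at their core, to an aspect-ratio test on the largest contour. A toy sketch, with the caveat that the cut-off values are assumptions loosely following the bands quoted in the text (tall and narrow for standing, wide and flat for lying), not the patent's exact numbers:

```python
def classify_contour(width, height):
    """Classify the largest optical-flow contour by its height-to-width
    ratio. Cut-offs are illustrative assumptions, not the patent's values."""
    ratio = height / width
    if ratio > 5.0:       # much taller than wide: standing
        return "standing"
    if ratio < 0.2:       # much wider than tall (beyond 5:1): lying down
        return "lying_down"
    return "squatting"    # intermediate band
```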
In this embodiment, in step 50, besides delivering a safety reminder message to the person according to the reminder instruction, management staff are also informed so that they can handle the safety-type abnormal behaviour on site; in an emergency, an ambulance can be called directly to the scene.
In one embodiment, based on the human optical-flow contours computed in the above embodiments, the crowd density and congestion state in the store can also be computed. As shown in Fig. 6, this includes the following steps:
In step 335, using the optical-flow contours as matching content, feature matching is performed between the candidate human bounding boxes and the human trajectories to be updated.
In step 336, the best match between candidate bounding boxes and trajectories is found, and each trajectory is updated with the information of its matched box.
In step 337, the positional relationships between the different camera regions of the store video are obtained, so that a human trajectory switches automatically from one camera region to another.
In step 338, the trajectories from the different camera regions are combined to form the complete trajectory of each person.
In step 339, the complete trajectories are plotted as curves on a two-dimensional top-down plan of the store, and the current position of each trajectory is plotted as a coordinate point on the plan.
In step 340, the trajectory coordinate points in each region are counted, and the count for each region is compared with a set threshold to judge whether that region is congested.
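The per-region counting of step 340 can be sketched as below; the region labels and the threshold are illustrative assumptions.

```python
from collections import Counter

CONGESTION_THRESHOLD = 3  # assumed maximum number of people per region

def congested_regions(points):
    """points: (region_label, x, y) tuples, one per current trajectory
    position on the top-down plan. Returns the set of congested regions."""
    counts = Counter(region for region, _x, _y in points)
    return {r for r, n in counts.items() if n > CONGESTION_THRESHOLD}
```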
A human trajectory is the motion path of a person across the images; from it, the person's route through the store and current position can be determined. Counting the coordinate points in each region of the store gives the number of people currently in that region; if the count exceeds the set threshold, the region holds too many people and is judged congested. Judging the personnel situation and congestion state of the whole store requires video images of all areas, so in this embodiment the video images cover the entire store space and the positional relationships of the different camera regions are fixed and known. Sharing object positions and trajectory information using the positions and angles of adjacent cameras can be done as follows:
As shown in Fig. 7, cameras A and B are a distance d apart, and their viewing directions make angles α and β with the line joining them. The coordinates of the object relative to camera A are (x1, y1), and relative to camera B are (x2, y2). From the imaging geometry of the cameras, x1 and x2 can be computed from the images but y1 and y2 cannot; they can only be deduced from x1, x2 and the positions and angles of the cameras. Geometric derivation then yields formula (1), which relates y1 and y2 to x1, x2, d, α and β.
On this principle, the ordinate of one camera is mapped to the abscissa of the adjacent camera, establishing the correspondence between the trajectory segments tracked by each group of cameras. Two things follow: first, all the trajectory segments of one person can be joined through this correspondence into a single complete trajectory; second, when a person leaves the field of view of one camera and enters that of another, the trajectory stays continuous.
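The text refers to a geometric formula (1) that is not reproduced in this extraction. As a hedged illustration of the underlying idea only (recovering the coordinate a single camera cannot measure by combining two cameras a known distance apart), here is a generic two-ray triangulation; this geometry is an assumption, not necessarily the patent's exact derivation.

```python
import math

def triangulate(d, bearing_a, bearing_b):
    """Cameras at (0, 0) and (d, 0); each reports the bearing (radians,
    measured from the baseline) towards the person. Intersect the rays:
    ray A is y = x * tan(bearing_a), ray B is y = (d - x) * tan(bearing_b)."""
    ta, tb = math.tan(bearing_a), math.tan(bearing_b)
    x = d * tb / (ta + tb)
    y = x * ta
    return x, y
```

With d = 2 and both bearings at 45 degrees the rays meet at (1, 1), the apex of the isosceles triangle over the baseline.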
In one embodiment, on the basis of plotting the complete human trajectories as curves on the two-dimensional top-down plan of the store as in the above embodiment, the following steps are also included:
analyzing the trajectory curves and judging from the path of each curve whether a person has entered a staff-only area;
if a person has entered a staff-only area, issuing a corresponding reminder instruction.
In one embodiment, in order to monitor and control the environmental conditions in the shop in real time, as shown in Fig. 8, the method further includes the following steps:
Obtain the relevant environment detection parameters in the shop;
Compare the relevant environment detection parameter values with set thresholds; if a threshold is exceeded, automatically adjust the corresponding electrical equipment.
In step 60, the relevant environment detection parameters in the shop are obtained.
The environment detection parameters include carbon dioxide, TVOC, formaldehyde, smoke, temperature and humidity, and illumination intensity, obtained through one or more of a carbon dioxide sensor, a TVOC sensor, a formaldehyde sensor, a smoke sensor, a temperature-humidity sensor, and a light sensor.
In step 70, the relevant environment detection parameter values are compared with the set thresholds; if a threshold is exceeded, the corresponding electrical equipment is automatically adjusted.
For example, the power of the air conditioner and fans is adjusted automatically according to the temperature and humidity values detected by the temperature-humidity sensor; the brightness of the lamps is adjusted automatically according to the illumination detected by the light sensor; when the detected carbon dioxide concentration exceeds the set threshold, the ventilation windows are opened automatically to ventilate the interior; when the detected TVOC level indicates slight or moderate pollution, the ventilation windows are likewise opened automatically to ventilate the interior; when the detected TVOC level indicates serious pollution, the ventilation windows are opened automatically and, at the same time, the emergency exits are opened automatically, the exit signs are lit, a shop-wide voice announcement through the interactive device instructs all in-store persons to leave the shop along the emergency exits, and management staff are notified to come to the shop; when the smoke sensor detects a fire-level smoke reading, the ceiling sprinkler system is turned on automatically, the ventilation windows are opened automatically, the emergency exits are opened automatically, the exit signs are lit, a voice announcement instructs everyone in the shop to leave along the emergency exits, and management staff are notified to come to the shop.
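The threshold logic of step 70 can be sketched as a small rule table; every threshold and action name below is an illustrative assumption, not a value taken from the patent:

```python
# Sketch of step 70 as a rule table: each rule maps a sensor reading to
# the appliance actions described above. All thresholds and action names
# are illustrative assumptions.

def environment_actions(readings):
    """readings: dict of sensor name -> measured value or level."""
    actions = []
    if readings.get("co2_ppm", 0) > 1000:            # assumed CO2 threshold
        actions.append("open_vent_windows")
    tvoc = readings.get("tvoc_level", "clean")
    if tvoc in ("slight", "moderate"):
        actions.append("open_vent_windows")
    elif tvoc == "serious":
        actions += ["open_vent_windows", "open_exits", "light_exit_signs",
                    "announce_evacuation", "notify_staff"]
    if readings.get("smoke_level", "none") == "fire":
        actions += ["start_sprinklers", "open_vent_windows", "open_exits",
                    "light_exit_signs", "announce_evacuation", "notify_staff"]
    return actions

print(environment_actions({"co2_ppm": 1200, "tvoc_level": "clean"}))  # ['open_vent_windows']
```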
In one embodiment, on the basis of the above embodiments, the following step is further included:
Send the in-shop regional crowd density and environmental parameters to be displayed.
On the one hand, this serves as a guide for in-store persons, helping them avoid congested areas; on the other hand, under poor environmental conditions, it reminds in-store persons to take protective measures.
The following embodiments disclose the unmanned store management system of the present invention, which can be used to carry out the above-disclosed embodiments of the unmanned shop order management method. For details not disclosed in the unmanned store management system embodiments, please refer to the unmanned shop order management method embodiments of the present disclosure.
Fig. 9 is a structural block diagram of an unmanned store management system in one embodiment, including but not limited to: an image capture module 81, a computing module 82, a server 83, and an interactive module 84. The computing module 82 is connected to the image capture module 81 and the server 83, and the server 83 is connected to the interactive module 84. The computing module 82 includes a deep learning network module 821 and an image processing module 822; the server 83 includes an order management module 831.
The image capture module 81 is used to obtain video images of the shop interior.
The deep learning network module 821 is used to perform human action recognition on the video images and obtain the postures of in-store persons.
The image processing module 822 is used to analyze the postures of in-store persons and judge their abnormal behavior types.
The order management module 831 is used to issue corresponding reminder instructions according to the abnormal behavior types of in-store persons.
The interactive module 84 is used to issue reminder messages to in-store persons according to the reminder instructions.
The functions of the modules in the above system and the realization of their effects are detailed in the corresponding steps of the above unmanned shop order management method and are not repeated here.
The image capture module 81 may be cameras, in particular fixed cameras with known positional relationships covering the whole shop.
In one embodiment, as shown in Fig. 10, the deep learning network module may be a posture-labeling convolutional neural network module 8211, used to analyze the video images and obtain the node sets of in-store persons' postures; the image processing module 822 matches the node sets against a posture node correlation database to obtain the postures of in-store persons.
In one embodiment, as shown in Fig. 11, the deep learning network module may also include, but is not limited to, a posture-labeling convolutional neural network module 8211, a human detection convolutional neural network 8212, and an optical flow computation convolutional neural network 8213. The human detection convolutional neural network 8212 is used to analyze the video images and frame human positions in the images; the optical flow computation convolutional neural network 8213 is used to analyze the video images and compute the optical flow information of the full frame. The image processing module 822 combines the framed human positions with the full-frame optical flow information to compute each person's optical flow value and optical flow contour; it compares the human optical flow value with a set threshold to judge whether an in-store person is in a speeding-walking or running posture, and analyzes the human optical flow contour to judge whether an in-store person is in a squatting or lying posture.
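A minimal sketch of the two judgments made by the image processing module 822; the flow vectors and contour points below stand in for the networks' outputs, and both thresholds are illustrative assumptions:

```python
import math

# Sketch of the two judgments described above. The flow field and the
# contour points stand in for the outputs of the human-detection and
# optical-flow networks; both thresholds are illustrative assumptions.

def motion_posture(flow_vectors, speed_threshold=5.0):
    """flow_vectors: list of (dx, dy) optical-flow vectors inside the
    detected human box. A large mean magnitude suggests speeding/running."""
    if not flow_vectors:
        return "still"
    mean_mag = sum(math.hypot(dx, dy) for dx, dy in flow_vectors) / len(flow_vectors)
    return "speeding_or_running" if mean_mag > speed_threshold else "normal"

def shape_posture(contour_points, ratio_threshold=1.2):
    """contour_points: (x, y) points of the human optical-flow contour.
    A wide, low bounding box (width/height above the threshold) suggests
    a squatting or lying posture."""
    xs = [p[0] for p in contour_points]
    ys = [p[1] for p in contour_points]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    return "squat_or_lie" if height == 0 or width / height > ratio_threshold else "upright"

print(motion_posture([(6.0, 8.0), (3.0, 4.0)]))        # mean magnitude 7.5 -> speeding_or_running
print(shape_posture([(0, 0), (10, 0), (10, 4), (0, 4)]))  # ratio 2.5 -> squat_or_lie
```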
Below, the unmanned store management system of the present invention is introduced with a complete embodiment. It should be understood that a concrete implementation of the foregoing control method does not require all of the modules in this embodiment; the corresponding functional modules need only be selected according to the required functions. Fig. 12 is a block diagram of the unmanned store management system of this embodiment.
The unmanned store management system of this embodiment includes: an image capture module 91, a computing module 92, a server 93, an interactive module 94, a communication module 95, a sensor module 96, and a remote terminal 97. The computing module 92 is connected to the image capture module 91, the communication module 95, and the server 93; the server 93 is also connected to the sensor module 96, the interactive module 94, and the communication module 95; the communication module 95 is also connected to the remote terminal 97.
The image capture module 91 is used to obtain video images of the shop interior and includes multiple cameras in the shop, in particular fixed cameras with known positional relationships covering the whole shop. The image capture module further includes an image output interface that sends the acquired video image information to the computing module 92.
The computing module 92 is used to perform human action recognition on the video images, obtain the postures of in-store persons, analyze those postures, judge the abnormal behavior types of in-store persons, and output order-abnormal-behavior signals or safety-abnormal-behavior signals to the server 93. The computing module 92 may be a circuit board equipped with a processor capable of high-speed parallel computing; the processor may be a GPU, an FPGA, or an artificial intelligence chip. The computing module 92 includes an image input interface for receiving the video image information output by the image capture module 91. The computing module 92 also runs multiple convolutional neural network combinations in one-to-one correspondence with the multiple cameras; each combination includes, but is not limited to, a posture-labeling convolutional neural network, a human detection convolutional neural network, and an optical flow computation convolutional neural network, and processes the video image information of one camera. The multiple convolutional neural networks are responsible for the deep learning computation and the image processing algorithms; the combinations can run simultaneously, with each convolutional neural network running frame-synchronized and sharing its output information.
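The frame-synchronized, simultaneous operation of the per-camera network combinations can be sketched with a barrier; the per-frame "network" work below is a placeholder, and the camera and frame counts are arbitrary:

```python
import threading

# Sketch of the per-camera processing described above: one worker per
# camera runs its (here, simulated) network combination on each frame,
# then waits at a barrier so all workers stay frame-synchronized and
# their per-frame outputs can be shared. The "networks" are placeholders.

NUM_CAMERAS = 3
NUM_FRAMES = 4
shared_outputs = {}                      # (frame, camera) -> result
lock = threading.Lock()
barrier = threading.Barrier(NUM_CAMERAS)

def camera_worker(cam_id):
    for frame in range(NUM_FRAMES):
        result = f"cam{cam_id}-frame{frame}"   # stand-in for CNN outputs
        with lock:
            shared_outputs[(frame, cam_id)] = result
        barrier.wait()                   # frame-synchronized output sharing

threads = [threading.Thread(target=camera_worker, args=(c,)) for c in range(NUM_CAMERAS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared_outputs))               # one result per camera per frame
```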
The sensor module 96 is used to obtain the relevant environment detection parameters in the shop and send the detected values to the server 93. The sensor module 96 includes, but is not limited to, a temperature-humidity sensor 962, an illuminance sensor 963, and an air quality sensor 964, where the air quality sensor includes, but is not limited to, a carbon dioxide sensor, a TVOC sensor, a formaldehyde sensor, and a smoke sensor.
The server 93 includes an order management module 931, a safety management module 932, a track following module 933, a congestion judgment module 934, and an equipment adjustment module 935. The server 93 may be a computer, or other computer equipment or a chip capable of performing the computation.
The order management module 931 is used to receive order-abnormal-behavior signals and issue corresponding reminder instructions.
The safety management module 932 is used to receive safety-abnormal-behavior signals and issue corresponding reminder instructions.
The track following module 933 is used to receive the computed human optical flow contours, combine the human tracks from the different imaging areas into a full human trajectory graph, plot the full trajectory graph onto the two-dimensional top view of the shop, and plot the current position of each track as a coordinate point in the two-dimensional top view.
The congestion judgment module 934 is used to count the number of trajectory coordinate points in each region, compare each region's count with a set threshold, and judge whether the region is in a congested state.
The equipment adjustment module 935 is used to compare the relevant environment detection parameter values with set thresholds and, if a threshold is exceeded, automatically adjust the corresponding electrical equipment.
The interactive module 94 includes a sound system 941 and a display screen 942.
The sound system 941 is used to issue reminder messages to in-store persons according to the reminder instructions.
The display screen 942 is used to display the in-shop regional crowd density and environmental parameters; when the smoke sensor detects a fire-level smoke reading, the display screen shows the emergency exit routes in the shop to help in-store persons leave in an orderly manner.
The communication module 95 includes a LAN module 951 and an Internet module 952.
The LAN module 951 is connected to each of the multiple convolutional neural network combinations and shares their output information to realize whole-shop tracking of in-store person tracks.
The Internet module 952 is used to send the in-store person track information, abnormal behavior information, congestion states, and equipment operation conditions to the remote terminal 97. For example, after receiving order-abnormal-behavior information, if repeated reminders are ineffective, the server 93 also sends a reminder through the Internet module 952 to the remote terminal 97, prompting management staff to intervene manually; after receiving safety-abnormal-behavior information, while outputting a safety reminder to the interactive module 94, the server 93 can also send a reminder through the Internet module 952 to the remote terminal 97, prompting management staff to intervene promptly. Management staff can also send instructions to the server 93 through the remote terminal 97, for example operation instructions to control or adjust in-shop equipment through the equipment adjustment module 935, or voice information to be played in the shop through the sound system 941.
The remote terminal 97 may be a mobile device, such as a mobile phone or a tablet, or fixed computer equipment.
The unmanned store management system of the present invention analyzes and judges the behavior postures of in-store persons in real time and issues different reminders for different abnormal behaviors, realizing intelligent management of the safety and order of an unattended shop without manual monitoring and intervention, and improving the safety and controllability of unmanned shop operation. Through real-time monitoring and control of the in-shop environmental parameters, in-store persons enjoy a comfortable shopping environment; through remote-terminal monitoring of the in-shop safety and environmental conditions, the safety and controllability of shop operation are further improved.
The embodiments described above express only several implementations of the present invention, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the inventive concept, and these all belong to the protection scope of the present invention.

Claims (12)

1. An unmanned shop order management method, characterized by comprising the following steps:
acquiring video images of the shop interior;
performing human action recognition on the video images to obtain the postures of in-store persons;
analyzing the postures of in-store persons to judge their abnormal behavior types;
issuing corresponding reminder instructions according to the abnormal behavior types of in-store persons;
issuing reminder messages to in-store persons according to the reminder instructions.
2. The unmanned shop order management method according to claim 1, characterized in that performing human action recognition on the video images to obtain the postures of in-store persons comprises:
inputting the video images into a posture-labeling convolutional neural network to obtain the node sets of in-store persons' postures;
matching the node sets against a posture node correlation database to obtain the postures of in-store persons.
3. The unmanned shop order management method according to claim 1, characterized in that performing human action recognition on the video images to obtain the postures of in-store persons comprises:
inputting the video images into a human detection convolutional neural network for processing, framing human positions in the images;
inputting the video images into an optical flow computation convolutional neural network for processing, computing the optical flow information of the full frame;
combining the framed human positions with the full-frame optical flow information to compute each person's optical flow value;
comparing the human optical flow value with a set threshold to judge whether an in-store person is in a speeding-walking or running posture.
4. The unmanned shop order management method according to claim 1, characterized in that performing human action recognition on the video images to obtain the postures of in-store persons comprises:
inputting the video images into a human detection convolutional neural network for processing, framing human positions in the images;
inputting the video images into an optical flow computation convolutional neural network for processing, computing the optical flow information of the full frame;
combining the framed human positions with the full-frame optical flow information to compute each person's optical flow contour;
analyzing the human optical flow contour to judge whether an in-store person is in a squatting or lying posture.
5. The unmanned shop order management method according to claim 4, characterized in that analyzing the human optical flow contour to judge whether an in-store person is in a squatting or lying posture comprises:
performing image dilation on the optical flow contour of the in-store person to form a complete contour map;
high-pass filtering the contour map to obtain texture details;
performing contour extraction on the contour map to obtain the contour enclosing the largest area;
comparing the length-width ratio of the largest-area contour with a set threshold to judge whether the in-store person's posture is a squatting or lying posture.
6. The unmanned shop order management method according to claim 4 or 5, characterized by further comprising the following steps:
using the optical flow contour as matching content, performing feature matching between candidate human frames and the human tracks to be updated;
performing best matching between candidate human frames and the human tracks to be updated, and updating the human frame information into the corresponding human tracks;
obtaining the positional relationships of the different imaging areas of the shop video images, realizing automatic switching of human tracks between different camera areas;
combining the human tracks from the different imaging areas to form a full human trajectory graph;
plotting the full human trajectory graph onto the two-dimensional top view of the shop, with the current position of each track plotted as a coordinate point in the two-dimensional top view;
counting the number of trajectory coordinate points in each region of the shop, comparing each region's count with a set threshold, and judging whether the region is in a congested state.
7. The unmanned shop order management method according to any one of claims 1 to 4, characterized by further comprising the following steps:
obtaining the relevant environment detection parameters in the shop;
automatically controlling the corresponding electrical equipment according to the relevant environment detection parameters.
8. The unmanned shop order management method according to claim 7, characterized by further comprising the following step:
if a relevant environment detection parameter in the shop exceeds a set threshold, sounding an alarm to in-store persons and instructing persons in the shop to leave the shop.
9. An unmanned store management system, characterized by comprising: an image capture module, a computing module, a server, and an interactive module, wherein the computing module is connected to the image capture module and the server, the server is connected to the interactive module, the computing module includes a deep learning network module and an image processing module, and the server includes an order management module;
the image capture module is used to obtain video images of the shop interior;
the deep learning network module is used to perform human action recognition on the video images and obtain the postures of in-store persons;
the image processing module is used to analyze the postures of in-store persons and judge their abnormal behavior types;
the order management module is used to issue corresponding reminder instructions according to the abnormal behavior types of in-store persons;
the interactive module is used to issue reminder messages to in-store persons according to the reminder instructions.
10. The unmanned store management system according to claim 9, characterized in that:
the deep learning network module includes a posture-labeling convolutional neural network, which is used to analyze the video images, obtain the node sets of in-store persons' postures, match the node sets against a posture node correlation database, and obtain the postures of in-store persons.
11. The unmanned store management system according to claim 9 or 10, characterized in that:
the deep learning network module includes a human detection convolutional neural network and an optical flow computation convolutional neural network;
the human detection convolutional neural network is used to analyze the video images and frame human positions in the images;
the optical flow computation convolutional neural network is used to analyze the video images and compute the optical flow information of the full frame, and, combining the framed human positions with the full-frame information, compute each person's optical flow value and optical flow contour;
the image processing module is also used to analyze the optical flow values and optical flow contours, judging whether an in-store person is in a speeding-walking or running posture, or in a squatting or lying posture.
12. The unmanned store management system according to claim 9, characterized in that:
the system further includes an environmental parameter detection module, which is connected to the server;
the environmental parameter detection module is used to obtain the relevant environment detection parameters in the shop and send them to the server;
the server is also used to compare the relevant environment detection parameter values with set thresholds and, if a threshold is exceeded, automatically adjust the corresponding electrical equipment.
CN201810448311.2A 2018-05-11 2018-05-11 A kind of unmanned shop order management method and system Pending CN110472458A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810448311.2A CN110472458A (en) 2018-05-11 2018-05-11 A kind of unmanned shop order management method and system


Publications (1)

Publication Number Publication Date
CN110472458A true CN110472458A (en) 2019-11-19

Family

ID=68504601


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178277A (en) * 2019-12-31 2020-05-19 支付宝实验室(新加坡)有限公司 Video stream identification method and device
CN111881786A (en) * 2020-07-13 2020-11-03 深圳力维智联技术有限公司 Store operation behavior management method, device and storage medium
CN113971782A (en) * 2021-12-21 2022-01-25 云丁网络技术(北京)有限公司 Comprehensive monitoring information management method and system
CN114972126A (en) * 2022-07-29 2022-08-30 阿法龙(山东)科技有限公司 Intelligent monitoring system for lighting equipment based on intelligent vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102917207A (en) * 2012-10-24 2013-02-06 沈阳航空航天大学 Motion sequence based abnormal motion vision monitoring system
CN106022234A (en) * 2016-05-13 2016-10-12 中国人民解放军国防科学技术大学 Abnormal crowd behavior detection algorithm based on optical flow computation
CN106803265A (en) * 2017-01-06 2017-06-06 重庆邮电大学 Multi-object tracking method based on optical flow method and Kalman filtering
CN107145878A (en) * 2017-06-01 2017-09-08 重庆邮电大学 Old man's anomaly detection method based on deep learning
CN107609635A (en) * 2017-08-28 2018-01-19 哈尔滨工业大学深圳研究生院 A kind of physical object speed estimation method based on object detection and optical flow computation



Similar Documents

Publication Publication Date Title
CN110472458A (en) A kind of unmanned shop order management method and system
Kooij et al. Multi-modal human aggression detection
TWI492188B (en) Method for automatic detection and tracking of multiple targets with multiple cameras and system therefor
KR101168760B1 (en) Flame detecting method and device
KR101953342B1 (en) Multi-sensor fire detection method and system
CN110298231A (en) A kind of method and system determined for the goal of Basketball Match video
Rinsurongkawong et al. Fire detection for early fire alarm based on optical flow video processing
CN109886999A (en) Location determining method, device, storage medium and processor
CN109284735B (en) Mouse feelings monitoring method, device and storage medium
US11972352B2 (en) Motion-based human video detection
KR100822476B1 (en) Remote emergency monitoring system and method
CN110956118B (en) Target object detection method and device, storage medium and electronic device
CN111428681A (en) Intelligent epidemic prevention system
CN102737474A (en) Monitoring and alarming for abnormal behavior of indoor personnel based on intelligent video
CN116437538B (en) Dimming method and system for crowd gathering monitoring based on multifunctional intelligent lamp post
CN109670391A (en) Wisdom lighting device and Dynamic Recognition data processing method based on machine vision
TWI427562B (en) Surveillance video fire detecting and extinguishing system
CN109028231A (en) A kind of the cigarette stove all-in-one machine and oil smoke concentration detection method of view-based access control model gesture control
CN209101365U (en) A kind of cigarette stove all-in-one machine having unmanned identification function
CN115762043A (en) Intelligent building fire control guidance system
JP2006244140A (en) Human behavior detecting and optimum responding system
WO2023141381A1 (en) Computer vision system and methods for anomalous event detection
CN115798047A (en) Behavior recognition method and apparatus, electronic device, and computer-readable storage medium
CN115731563A (en) Method for identifying falling of remote monitoring personnel
CN111160945A (en) Advertisement accurate delivery method and system based on video technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191119