CN108222749A - A kind of intelligent automatic door control method based on image analysis - Google Patents
A kind of intelligent automatic door control method based on image analysis
- Publication number: CN108222749A (application CN201711480203.5A)
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- E—FIXED CONSTRUCTIONS
- E05—LOCKS; KEYS; WINDOW OR DOOR FITTINGS; SAFES
- E05F—DEVICES FOR MOVING WINGS INTO OPEN OR CLOSED POSITION; CHECKS FOR WINGS; WING FITTINGS NOT OTHERWISE PROVIDED FOR, CONCERNED WITH THE FUNCTIONING OF THE WING
- E05F15/00—Power-operated mechanisms for wings
- E05F15/70—Power-operated mechanisms for wings with automatic actuation
- E05F15/73—Power-operated mechanisms for wings with automatic actuation responsive to movement or presence of persons or objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- E—FIXED CONSTRUCTIONS
- E05—LOCKS; KEYS; WINDOW OR DOOR FITTINGS; SAFES
- E05Y—INDEXING SCHEME ASSOCIATED WITH SUBCLASSES E05D AND E05F, RELATING TO CONSTRUCTION ELEMENTS, ELECTRIC CONTROL, POWER SUPPLY, POWER SIGNAL OR TRANSMISSION, USER INTERFACES, MOUNTING OR COUPLING, DETAILS, ACCESSORIES, AUXILIARY OPERATIONS NOT OTHERWISE PROVIDED FOR, APPLICATION THEREOF
- E05Y2900/00—Application of doors, windows, wings or fittings thereof
- E05Y2900/10—Application of doors, windows, wings or fittings thereof for buildings or parts thereof
- E05Y2900/13—Type of wing
- E05Y2900/132—Doors
Abstract
The invention discloses an intelligent automatic door control method based on image analysis, belonging to the technical field of data processing. The method includes: performing background modeling on the monitoring area of a camera installed on the automatic door to obtain an initial background image; differencing the image currently captured by the camera with the initial background image to obtain a foreground image; detecting and locating pedestrians in the foreground image to obtain the center position of each pedestrian region; tracking pedestrian positions via the center positions of the pedestrian regions across consecutive frames to obtain the pedestrian's direction of travel; judging whether the pedestrian's direction of travel is parallel to the horizontal direction; and, if so, controlling the automatic door to open, otherwise keeping it closed. By processing the images containing pedestrians and controlling the opening and closing of the intelligent automatic door according to the pedestrian's walking direction, the automatic door becomes suitable for complex environments such as corridors, and its wear rate is reduced.
Description
Technical field
The present invention relates to the technical field of data processing, and in particular to an intelligent automatic door control method based on image analysis.
Background technology
Automatic doors based on infrared sensors have been applied to every sphere of social life. At present, however, infrared automatic doors do not work well in some situations. For example, when an infrared automatic door is mounted on the side of a corridor, many people passing along the corridor have no intention of going through the door, yet the infrared door opens, because current infrared automatic doors cannot distinguish the pedestrian's walking direction. Moreover, when pedestrian traffic in the corridor is heavy, the infrared automatic door opens and closes at high frequency. On one hand, this accelerates the wear of the infrared automatic door. On the other hand, the high-frequency opening of the automatic door generates considerable noise in the surrounding environment, disturbing the work and study of nearby staff.
Invention content
The purpose of the present invention is to provide an intelligent automatic door control method based on image analysis, to improve the intelligence level of automatic doors.

To achieve the above object, the technical solution adopted by the present invention is an intelligent automatic door control method based on image analysis, comprising the following steps:
Perform background modeling on the monitoring area of the camera installed on the intelligent automatic door to obtain an initial background image;
Difference the image currently captured by the camera with the initial background image to obtain a foreground image;
In the foreground image, detect and locate pedestrians to obtain the center position of each pedestrian region;
Track pedestrian positions via the center positions of the pedestrian regions in consecutive frames to obtain the pedestrian's direction of travel;
Judge whether the pedestrian's direction of travel is parallel to the horizontal direction;
If so, control the intelligent automatic door to open; otherwise the intelligent automatic door does not open.
Preferably, after the pedestrian's direction of travel is parallel to the horizontal direction, the method further includes:
(a) Obtain the motion track of the center positions of the pedestrian region over consecutive frames, and compute the pedestrian's walking speed;
(b) judge whether the pedestrian's walking speed is less than a set threshold: if so, go to step (c); otherwise go to step (f);
(c) analyze the pedestrian's facial orientation and judge whether it is frontal: if so, go to step (d); otherwise go to step (f);
(d) analyze the pedestrian's face deflection angle and judge whether it is within a set range: if so, go to step (e); otherwise go to step (f);
(e) control the intelligent automatic door to open;
(f) the intelligent automatic door does not open.
Preferably, after performing background modeling on the monitoring area of the camera installed on the intelligent automatic door and obtaining the initial background image, the method further includes:
Cropping the image currently captured by the camera and the initial background image according to a set region of interest, obtaining a reduced-range current image and a reduced-range initial background image;
Correspondingly, differencing the image currently captured by the camera with the initial background image to obtain the foreground image includes:
Differencing the reduced-range image with the reduced-range initial background image to obtain the foreground image.
Preferably, performing background modeling on the monitoring area of the camera installed on the intelligent automatic door to obtain the background image specifically includes:
Capturing monitoring-area images of the camera, obtaining N frames of monitoring-area images;
Using the frame-difference method to detect whether a moving target is present in the monitoring-area images;
If so, recapturing the camera's monitoring-area images until N frames free of moving targets are obtained as background images;
Averaging the pixels at the same pixel position across the N background images to obtain the initial background image;
Analyzing each pixel of the currently captured monitoring-area frame to judge whether the pixel belongs to the initial background image;
If a pixel belongs to the background, updating it into the initial background image by weighting, thereby keeping the initial background image up to date.
Preferably, detecting and locating pedestrians in the foreground image to obtain the center positions of the pedestrian regions specifically includes:
Training the constructed first neural network with a first training dataset formed from single-pedestrian image samples, obtaining a pedestrian detector;
Extracting the region of each pedestrian from the reduced-range captured image;
Feeding each individual pedestrian region as an input to the pedestrian detector, which, after prediction, outputs the center position of the pedestrian in the currently captured image.
Preferably, tracking pedestrian positions via the center positions of the pedestrian regions in consecutive frames to obtain the pedestrian's direction of travel specifically includes:
Sending all pedestrian regions of the consecutively captured frames to the pedestrian detector for prediction, obtaining the center position of each pedestrian region;
Matching the pedestrian regions of adjacent frames with the minimum-distance method, obtaining the regions of adjacent frames that belong to the same pedestrian;
Associating the regions that belong to the same person in adjacent frames, and connecting the centers of the regions of the same pedestrian across adjacent frames, obtaining that pedestrian's motion track;
Judging the pedestrian's direction of travel from the motion track.
Preferably, obtaining the motion track of the center positions of the pedestrian region over consecutive frames and computing the pedestrian's walking speed specifically includes:
Continuously recording the motion track of a pedestrian over a range of N consecutive frames, noting the track's start point and end point;
Dividing the distance between the track's end point and start point by the number of frames, obtaining the pedestrian's average speed.
Preferably, analyzing the pedestrian's facial orientation specifically includes:
Constructing a second neural network and a second training dataset, and training the constructed second neural network with the second training dataset, obtaining a facial-orientation detector;
Normalizing the input image frame, and using the normalized image as the input of the facial-orientation detector, obtaining the ratio by which the facial orientation in the input frame is frontal and the ratio by which it is sideways;
Selecting, of the frontal ratio and the sideways ratio, the facial orientation with the higher ratio as the detection result of the facial-orientation detector.
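The normalization and ratio-selection steps above can be sketched as follows. This is a minimal illustration only: the divide-by-255 normalization and the class labels are assumptions, since the text does not specify the normalization scheme or the detector's output encoding.

```python
import numpy as np

def normalize_frame(frame):
    """Scale 8-bit pixel values into [0, 1] before feeding the frame to the orientation detector."""
    return frame.astype(np.float64) / 255.0

def select_orientation(frontal_ratio, side_ratio):
    """The detector's result is whichever orientation received the higher ratio."""
    return "frontal" if frontal_ratio >= side_ratio else "side"

frame = np.array([[0, 255], [128, 64]], dtype=np.uint8)
normalized = normalize_frame(frame)  # values now lie in [0, 1]
```

For example, ratios of 0.8 frontal versus 0.2 sideways would yield "frontal" as the detection result.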
Preferably, analyzing the pedestrian's face deflection angle specifically includes:
Constructing a third neural network and a third training dataset, and training the third neural network with the third training dataset, obtaining a face-deflection-angle detector;
Cropping the input image frame, obtaining a local image region that retains the pedestrian's head;
Normalizing the local image region, obtaining a normalized local image region;
Using the normalized local image region as the input of the face-deflection-angle detector, obtaining the face deflection angle.
Preferably, after detecting and locating pedestrians in the foreground image and obtaining the center positions of the pedestrian regions, the method further includes:
Cropping the pedestrian region, retaining the arm region of the human body;
Performing thresholding and connected-domain extraction on the arm region of the human body, obtaining all connected domains of the arm region;
Choosing the connected domain of maximum area, and solving the bounding rectangle of that connected region;
Controlling the intelligent automatic door to open when the length of the bounding rectangle in the horizontal direction is less than its length in the vertical direction.
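The bounding-rectangle test above (a tall, narrow region suggesting a raised arm) can be sketched as follows. This is illustrative only; it assumes the thresholding and connected-domain extraction described elsewhere in the text have already isolated the largest domain as a binary mask.

```python
import numpy as np

def bounding_rectangle(mask):
    """Return (width, height) of the bounding rectangle of the nonzero pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)

def should_open_for_raised_arm(mask):
    """Open when the rectangle's horizontal length is less than its vertical length."""
    width, height = bounding_rectangle(mask)
    return width < height

arm = np.zeros((20, 20), dtype=np.uint8)
arm[3:15, 8:11] = 1  # a tall, narrow region: consistent with a raised arm
```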
Compared with the prior art, the present invention has the following technical effects. A camera mounted on the intelligent automatic door captures images of the monitoring area; the captured images are analyzed to obtain the pedestrian's direction of travel, and the intelligent automatic door is controlled to open when the pedestrian's direction of travel is perpendicular to the camera. Here, when the infrared automatic door is mounted on the side of a corridor, the intelligent automatic door is controlled to open upon judging that the pedestrian's direction of travel is parallel to the horizontal direction. By processing the images containing pedestrians and controlling the opening and closing of the intelligent automatic door according to the pedestrian's walking direction, this scheme, compared with prior-art schemes that open the door whenever any pedestrian passes through the corridor, greatly reduces the possibility of the automatic door opening by mistake, making the automatic door suitable for complex environments such as corridors and reducing its wear rate.
Description of the drawings
The specific embodiments of the present invention are described in detail below with reference to the accompanying drawings:
Fig. 2 is the definition schematic diagram of adjacent pixel;
Fig. 3 is neural network structure schematic diagram;
Fig. 4 is single neuronal structure schematic diagram;
Fig. 5 is a kind of overall procedure schematic diagram of the intelligent automatic door control method based on image analysis;
Fig. 6 is a flow diagram of wave detection.
Specific embodiment
To further illustrate the features of the present invention, refer to the following detailed description and the accompanying drawings. The drawings are for reference and discussion only and are not intended to limit the scope of protection of the present invention.
As shown in Fig. 1, the present embodiment discloses an intelligent automatic door control method based on image analysis, comprising the following steps:
S101. Perform background modeling on the monitoring area of the camera installed on the intelligent automatic door to obtain an initial background image;
S102. Difference the image currently captured by the camera with the initial background image to obtain a foreground image;
S103. In the foreground image, detect and locate pedestrians to obtain the center position of each pedestrian region;
S104. Track pedestrian positions via the center positions of the pedestrian regions in consecutive frames to obtain the pedestrian's direction of travel;
S105. Judge whether the pedestrian's direction of travel is parallel to the horizontal direction;
S106. If so, control the intelligent automatic door to open;
S107. Otherwise the intelligent automatic door does not open.
As a further preferred scheme, since the camera on the intelligent automatic door is fixed, background modeling can be performed on the camera's monitoring area to obtain a background image and reduce invalid search. In this embodiment, step S101 performs background modeling with multi-frame averaging and real-time iteration, specifically including:
(1) Capture N frames (e.g. 300 frames) of monitoring-area images in total.
(2) Use the frame-difference method to detect whether a moving target is present in the N monitoring-area frames; if so, recapture until the number of monitoring-area images free of moving targets meets the requirement. The process of detecting whether a moving target is present in a monitoring-area image is as follows:
a1. Frame differencing: for the currently obtained monitoring-area frame P_{i+1} and the previous monitoring-area frame P_i, compute the frame difference, with the result denoted D_{i+1}, i ≥ 0:

D_{i+1}(x, y) = |P_{i+1}(x, y) − P_i(x, y)|,

where P_{i+1}(x, y) is the pixel value of monitoring-area frame P_{i+1} at point (x, y), P_i(x, y) is the pixel value of monitoring-area frame P_i at point (x, y), and D_{i+1}(x, y) is the pixel value of the frame-difference image D_{i+1} at point (x, y).

b1. Thresholding: threshold the frame-difference image D_{i+1} with a threshold of 20, with the following concrete principle: in the frame-difference image D_{i+1}, for each pixel (x, y), if the value of D_{i+1}(x, y) is greater than 20, the point (x, y) is retained as a foreground pixel and its pixel value is set to 1; if the value of D_{i+1}(x, y) is less than or equal to 20, its pixel value is set to 0. The final image is denoted DX_{i+1}.

It should be noted that the threshold value 20 is an empirical value for thresholding, obtained by those skilled in the art through many experiments.
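The frame-differencing and thresholding steps a1 and b1 can be sketched in a few lines of NumPy. This is a sketch, not the patent's implementation; the threshold of 20 follows the text.

```python
import numpy as np

FRAME_DIFF_THRESHOLD = 20  # empirical threshold from the text

def frame_difference_mask(prev_frame, curr_frame, threshold=FRAME_DIFF_THRESHOLD):
    """DX_{i+1}: 1 where |P_{i+1}(x, y) - P_i(x, y)| > threshold, else 0."""
    # Cast to a signed type before subtracting to avoid unsigned 8-bit wrap-around.
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return (diff > threshold).astype(np.uint8)

# A single bright pixel moves between two otherwise identical frames.
prev_frame = np.zeros((4, 4), dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[1, 2] = 200
mask = frame_difference_mask(prev_frame, curr_frame)
```

Only the changed pixel survives the threshold, so `mask` is 1 at (1, 2) and 0 elsewhere.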
c1. Connected-domain generation: traverse the image DX_{i+1} pixel by pixel; if the pixel values of two adjacent pixels are both nonzero, assign the two pixels to the same connected domain, thus obtaining multiple connected domains. The adjacency of two pixels is defined as shown in Fig. 2: for a pixel x, pixels 1-8 are its adjacent pixels.

d1. Connected-domain area computation: analyze the area of each connected domain. If the area of some connected domain (the number of white pixels) exceeds 30 pixels, that connected domain is considered too large, i.e., frame P_{i+1} is determined to contain a moving target, and the frame is discarded. If the area of every connected domain is less than or equal to 30 pixels, P_{i+1} is considered free of moving targets; the frame is retained as a background image, denoted S_j, 1 ≤ j ≤ N.

Continue with frame P_{i+2} and frame P_{i+1}, repeating steps a1 to d1, until N monitoring-area frames without moving targets have been obtained.
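The connected-domain generation and area test of steps c1 and d1 can be sketched as follows, using the 8-adjacency of Fig. 2 and the 30-pixel area limit from the text. This is illustrative only, not the patent's code.

```python
import numpy as np
from collections import deque

def connected_domains(mask):
    """Label 8-connected domains of nonzero pixels; return a list of pixel-coordinate lists."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    domains = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                queue, domain = deque([(y, x)]), []
                seen[y, x] = True
                while queue:  # breadth-first flood fill over the 8 neighbours
                    cy, cx = queue.popleft()
                    domain.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                queue.append((ny, nx))
                domains.append(domain)
    return domains

def contains_moving_target(mask, max_area=30):
    """A frame is rejected as background if any connected domain exceeds max_area pixels."""
    return any(len(d) > max_area for d in connected_domains(mask))

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:8, 2:8] = 1    # one 36-pixel blob: larger than the 30-pixel limit
small = np.zeros((10, 10), dtype=np.uint8)
small[1:3, 1:3] = 1   # one 4-pixel blob: acceptable as background
```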
(3) Using the N background images obtained above, compute the average value at each pixel position to obtain the initial background image, denoted B:

B(x, y) = (1/N) · Σ_{j=1}^{N} S_j(x, y),

where B(x, y) is the pixel value of the initial background image B at point (x, y), and S_j(x, y) is the pixel value of image S_j at point (x, y).
(4) During real-time detection, each pixel of the current monitoring-area frame is analyzed; if a pixel belongs to the background, it is blended into the background image by weighting, as follows:

a2. Set the threshold for deciding whether a pixel belongs to the background to e = 20, where 20 is an empirical value.

b2. During real-time monitoring, let the current monitoring-area frame be P_j. Analyze P_j pixel by pixel: if a pixel satisfies |P_j(x, y) − B(x, y)| < e, perform the background-update operation on it; otherwise do not. Here P_j(x, y) is the pixel value of the current frame P_j at pixel (x, y).

c2. Since the image information of the initial background image B is relatively stable, B should keep a large proportion during the background update; the background-update operation for a background pixel (x, y) of the current monitoring-area frame is:

B(x, y) = B(x, y) · 0.9 + 0.1 · P_j(x, y).
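The pixel averaging of step (3) and the weighted update of step (4) can be sketched together, using the text's values e = 20 and weights 0.9/0.1:

```python
import numpy as np

def initial_background(frames):
    """B(x, y): the average of the N background frames at each pixel position."""
    return np.mean(np.stack(frames).astype(np.float64), axis=0)

def update_background(B, current, e=20, keep=0.9):
    """Blend background-like pixels of the current frame into B; leave other pixels unchanged."""
    current = current.astype(np.float64)
    is_background = np.abs(current - B) < e          # |P_j(x, y) - B(x, y)| < e
    return np.where(is_background, keep * B + (1 - keep) * current, B)

frames = [np.full((2, 2), v, dtype=np.uint8) for v in (98, 100, 102)]
B = initial_background(frames)                       # every pixel averages to 100.0
current = np.array([[105, 100], [200, 100]], dtype=np.uint8)
B2 = update_background(B, current)                   # 105 is blended in; 200 is rejected
```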
As a further preferred scheme, after step S101, the method further includes:
Cropping the image currently captured by the camera and the initial background image according to a set region of interest, obtaining a reduced-range current image and a reduced-range initial background image;
Correspondingly, differencing the image currently captured by the camera with the initial background image to obtain the foreground image includes:
Differencing the reduced-range image with the reduced-range initial background image to obtain the foreground image.

It should be noted that, for an intelligent automatic-door device, a camera above the automatic door can monitor the surrounding environment and detect whether someone is passing nearby, so as to control the intelligent automatic door to open. For the camera, however, the monitoring area is often not limited to the corridor and also includes regions where people do not appear. These regions where people never appear increase the time the system needs to decide whether to open the automatic door. Therefore, in this embodiment a region of interest is set manually for the frames captured by the camera (the region is set empirically according to the actual situation), such that pedestrian activity rarely occurs outside it. The image frames are cropped to this region, greatly improving detection speed and reducing unnecessary interference.
As a further preferred scheme, step S103 (in the foreground image, detect and locate pedestrians to obtain the center positions of the pedestrian regions) specifically includes:

(1) Train the neural network Faster-RCNN with the first training dataset, obtaining the pedestrian detector. The first training dataset consists of a public dataset and a self-collected dataset; the data are images of single pedestrians, and the dataset contains 10000 pedestrian images. The specific training steps are as follows:

a. Manually annotate the 10000 pedestrian images, obtaining for any image D_i the center position of the pedestrian, i.e., the position of the pedestrian's abdominal region, denoted Y_i, finally obtaining 10000 (D_i, Y_i) pairs.

b. With the 10000 D_i as inputs and the 10000 Y_i as outputs, each (D_i, Y_i) pair forms one group of mapping data, from which a mapping F is fitted. Once the mapping F is obtained, for any given input the result, i.e., the output of F, can be predicted according to the mapping F.
In the present embodiment this mapping is fitted with the BP (back-propagation) algorithm. Faster-RCNN is a network composed of many neurons, as shown in Fig. 3; for a single neuron, the BP algorithm is specifically as follows.

It should be noted that the structure of a simple small neural network can be as shown in Fig. 4, where each circle represents a neuron, w1 and w2 represent the weights between neurons, b represents the bias, g(z) is the activation function that makes the output nonlinear, a represents the output, and x1 and x2 represent the inputs. For this structure, the output can be expressed as:

a = g(x1·w1 + x2·w2 + 1·b).

Clearly, with the input data and the activation function fixed, the output value a of the neural network depends on the weights and the bias; with different weights and biases, the output of the network also differs.

Therefore, the mapping F above is determined by the weights and biases between the neurons; for different values of the weights and biases, the mapping F also differs. The process of fitting the mapping F is thus to find weights and biases under which F is optimal, i.e., for the inputs D_i of the first training set, the error between F(D_i) and Y_i is minimal.
Let a be the output of the neural network (the predicted value), and let a' be the corresponding actual value. The BP algorithm is executed as follows:
(B-1) Randomly initialize the weight of every connection (w1 and w2) and the bias b;
(B-2) For the input data x1, x2, first perform forward propagation to obtain the predicted value a;
(B-3) According to the error E = ½(a − a')² between the actual value a' and the predicted value a, update the weight of every connection and the bias of every layer by backward feedback. The weights and bias are updated using the partial derivatives of E with respect to w1, w2 and b, scaled by the learning rate η, a parameter set in advance (e.g. w1 ← w1 − η·∂E/∂w1);
(B-4) Repeat steps (B-2) to (B-3) until the network converges, i.e., the value of E is minimal or remains essentially unchanged, indicating that the network has been trained.
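Steps (B-1) to (B-4) for the single neuron of Fig. 4 can be sketched as a small runnable example. The identity activation g(z) = z and the sample mapping are assumptions for illustration; the patent does not name the activation function.

```python
import random

def train_neuron(samples, eta=0.05, epochs=2000, seed=0):
    """Fit a = g(x1*w1 + x2*w2 + b), g(z) = z, by gradient descent on E = 0.5*(a - a')**2."""
    rng = random.Random(seed)
    w1, w2, b = rng.random(), rng.random(), rng.random()  # (B-1) random initialization
    for _ in range(epochs):
        for (x1, x2), target in samples:
            a = x1 * w1 + x2 * w2 + b                     # (B-2) forward propagation
            err = a - target                              # dE/da for E = 0.5*(a - a')^2
            w1 -= eta * err * x1                          # (B-3) dE/dw1 = err * x1
            w2 -= eta * err * x2                          # (B-3) dE/dw2 = err * x2
            b  -= eta * err                               # (B-3) dE/db  = err
    return w1, w2, b                                      # (B-4) after enough repeats, E is minimal

# Hypothetical target mapping: a' = 2*x1 - x2 + 1
samples = [((x1, x2), 2 * x1 - x2 + 1) for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)]
w1, w2, b = train_neuron(samples)
```

After training, the learned weights and bias approach the generating values (2, −1, 1), which is exactly the "find weights and biases so that the error is minimal" process described above.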
(2) After Faster-RCNN (an existing pedestrian-detection framework, requiring no construction) has been trained, the image S_i captured by the camera in real time is cropped to obtain the reduced-range region image, denoted C_i. The image C_i may contain multiple pedestrians; therefore, the region of each pedestrian is extracted from C_i, and each individual pedestrian region then serves as an input to an individual Faster-RCNN. The specific execution is as follows:

a. Frame differencing with the initial background image: the initial background image B is also cropped to obtain a reduced-range initial background image, denoted B_s. At this point B_s and C_i have the same size and can be frame-differenced; the frame-difference result is denoted Z_i, with the frame-difference principle as above.

b. Binarization: binarize the frame-difference image Z_i with a threshold of 20 (an empirical value); the binarized image is denoted E_i.

c. Connected-domain extraction: perform the connected-domain generation operation on image E_i.

d. Shape and size judgment: for each connected domain of image E_i, solve its bounding rectangle. If the bounding rectangle's area exceeds 100 pixels (an empirical value), the corresponding connected domain is considered a pedestrian; the top-left coordinate of the bounding rectangle is recorded and the domain retained. Otherwise the connected domain is deleted.

For each retained connected domain, since the top-left coordinate of its bounding rectangle is known, the corresponding region is extracted from image C_i according to the coordinate position and rectangle size; the extracted region serves as the input to the Faster-RCNN neural network.

(3) After prediction by Faster-RCNN, the output of the network represents the center position of the pedestrian in the input image.
In practical applications, since a pedestrian's walking is continuous, the pedestrian can be tracked over consecutive frames to obtain the direction of travel. The above step S104 (track pedestrian positions via the center positions of the pedestrian regions in consecutive frames to obtain the pedestrian's direction of travel) then specifically includes:

(1) In the consecutively captured frames, let the current frame be P_i, and let P_ij be the j-th pedestrian-detection region of the current frame. After P_ij is fed to Faster-RCNN for prediction, the output data of the network's penultimate layer is recorded, labeled T_ij (stored in vector form), and used as the feature for filtering out the moving regions of the same person.

(2) Suppose frame P_i contains j pedestrian regions whose centers have all been detected, and frame P_{i+1} likewise contains j pedestrian regions whose centers have all been detected. Associating the pedestrian regions that belong to the same person in adjacent frames yields that pedestrian's motion track.
In this embodiment, the minimum-distance method is used to match the regions of the same pedestrian in adjacent frames; the specific process is as follows:

a. Let T_ik be the feature of the k-th pedestrian region of frame P_i. The difference degree X_kl between the k-th pedestrian region of frame P_i and the l-th pedestrian region of frame P_{i+1} is computed as:

X_kl = Σ_{o=1}^{M} (T_iko − T_(i+1)lo)²,

where T_iko is the o-th component of feature T_ik, and M is the total number of components of the feature vector.

b. For the k-th pedestrian region of frame P_i, compute its difference degree with every pedestrian region of frame P_{i+1}, and record the index of the pedestrian region of frame P_{i+1} at which the difference degree attains its minimum, denoted g. The k-th pedestrian region of frame P_i and the g-th pedestrian region of frame P_{i+1} then belong to the same pedestrian; the two regions correspond one to one and are matched.

c. Perform steps a and b in turn for the remaining pedestrian regions of frame P_i, thus finding the corresponding matched regions.
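The minimum-distance matching of steps a to c can be sketched as follows. The feature vectors here are hypothetical two-component placeholders standing in for the penultimate-layer outputs of the detector.

```python
def difference_degree(t_a, t_b):
    """X_kl: sum over components of the squared difference between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(t_a, t_b))

def match_regions(features_i, features_next):
    """For each region k of frame P_i, find the index g in P_{i+1} minimizing X_kl."""
    matches = {}
    for k, t_k in enumerate(features_i):
        g = min(range(len(features_next)),
                key=lambda l: difference_degree(t_k, features_next[l]))
        matches[k] = g
    return matches

# Two pedestrians whose features shift slightly between consecutive frames,
# listed in swapped order in the second frame.
frame_i  = [[0.0, 1.0], [5.0, 5.0]]
frame_i1 = [[5.1, 4.9], [0.1, 1.1]]
matches = match_regions(frame_i, frame_i1)
```

Despite the swapped ordering, each region is matched to its counterpart with the smallest difference degree.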
(3) After the regions of the same pedestrian in adjacent frames have been associated, the center positions of the regions corresponding to the same pedestrian across adjacent frames (i.e. the outputs of Faster-RCNN) are connected by a line, yielding the motion trajectory of that pedestrian.
(4) The direction of the pedestrian is obtained from the motion trajectory. Specifically, if the trajectory line is approximately parallel to the horizontal direction, the pedestrian is considered to have no intention of exiting; the region corresponding to that pedestrian is then deleted from the original image to reduce interference, laying the foundation for a more accurate judgment of whether someone is about to exit.
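The horizontal-trajectory test of step (4) can be illustrated as below. This is a sketch under the assumption that "approximately parallel to the horizontal direction" means the trajectory's vertical spread is small relative to its horizontal spread; the tolerance value is illustrative, not taken from the patent:

```python
def is_roughly_horizontal(track, tolerance=0.25):
    """track: list of (x, y) center points of one pedestrian across frames.
    Returns True when the vertical spread is small compared with the
    horizontal spread, i.e. the person is walking along the corridor."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    dx = max(xs) - min(xs)
    dy = max(ys) - min(ys)
    if dx == 0:
        return False  # purely vertical motion (towards/away from the door)
    return dy / dx <= tolerance

corridor_walk = [(10, 100), (30, 101), (50, 99), (70, 100)]
towards_door = [(40, 10), (41, 40), (39, 70), (40, 100)]
print(is_roughly_horizontal(corridor_walk))  # True
print(is_roughly_horizontal(towards_door))   # False
```

A pedestrian flagged as roughly horizontal would then be dropped from further consideration, as the text describes.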
As shown in Figure 5, as a further preferred scheme, after the above judgment that the pedestrian's direction of travel is parallel to the horizontal direction, the present embodiment further includes the following steps:
(a) obtain the motion trajectory of the center positions corresponding to the pedestrian regions in the consecutive frames of images, and obtain the pedestrian's walking speed;
(b) judge whether the pedestrian's walking speed is below a set threshold; if so, execute step (c), otherwise execute step (f);
(c) analyze the pedestrian's orientation and judge whether the facial orientation is frontal; if so, execute step (d), otherwise execute step (f);
(d) analyze the pedestrian's face deflection angle and judge whether the deflection angle is within a set range; if so, execute step (e), otherwise execute step (f);
(e) control the intelligent automatic door to open;
(f) the intelligent automatic door does not open.
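Steps (a)–(f) amount to a simple gate over three measurements. A sketch, with default thresholds taken from the values mentioned later in the embodiment (5 pixels per frame, ±20°); the function and parameter names are illustrative:

```python
def should_open_door(speed, facing_front, deflection_deg,
                     speed_threshold=5.0, max_deflection=20.0):
    """Return True only when the pedestrian is slow enough (step b),
    facing the camera (step c), and within the deflection range (step d)."""
    if speed >= speed_threshold:              # step (b) fails -> step (f)
        return False
    if not facing_front:                      # step (c) fails -> step (f)
        return False
    if abs(deflection_deg) > max_deflection:  # step (d) fails -> step (f)
        return False
    return True                               # step (e): open the door

print(should_open_door(2.0, True, 10.0))   # True
print(should_open_door(8.0, True, 10.0))   # False (too fast: corridor walker)
print(should_open_door(2.0, True, 45.0))   # False (head turned away)
```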
As a further preferred scheme, the process of analyzing the pedestrian's walking speed is:
(1) Continuously acquire the motion trajectory of a pedestrian over a range of 10 consecutive frames, and record the start point and end point of the trajectory.
(2) Compute the average speed V of the pedestrian with the formula:

V = L / 10

where L is the distance between the end point and the start point of the trajectory.
(3) If the average speed V exceeds 5 pixels per frame (an empirical value), the pedestrian is considered to be someone walking along the corridor and the door need not be opened; otherwise, the pedestrian's orientation is analyzed.
It should be noted that, in practical applications, a person preparing to exit and a person walking in a straight line along the corridor move at different speeds, so whether someone is about to exit can be detected by judging the pedestrian's speed.
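The average-speed computation of steps (1)–(3) can be sketched as follows, assuming V is the straight-line distance between the trajectory's end points divided by the number of frames (consistent with claim 7); the trajectory data is illustrative:

```python
import math

def average_speed(track):
    """track: (x, y) centers of one pedestrian over consecutive frames.
    L = distance between end point and start point; V = L / frame count."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    l = math.hypot(x1 - x0, y1 - y0)
    return l / len(track)

track_10 = [(i * 8, 50) for i in range(10)]  # moves 8 px/frame horizontally
v = average_speed(track_10)
print(v)  # 7.2 pixels per frame
is_corridor_walker = v > 5  # empirical threshold from the embodiment
print(is_corridor_walker)   # True
```

Note that using only the end points makes V an average of net displacement, so a person pacing back and forth yields a low V even if they move quickly.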
As a further preferred scheme, the pedestrian's orientation is analyzed as follows:
A frontal face image and a profile face image differ considerably, so by analyzing the content of the face image the current facial orientation can be judged as frontal or sideways. A neural network is likewise employed for this judgment task, implemented as follows:
(1) Build a second neural network: the second neural network has five layers. The first layer is the input layer with 300 neurons; the second through fourth layers are hidden layers containing 200, 400 and 400 neurons respectively; the last layer is the output layer with 2 neurons.
(2) Build a second training dataset: the dataset contains 6000 samples in total, comprising 3000 frontal face images and 3000 profile face images.
(3) Train the second neural network on the second training dataset, updating the weights with the BP algorithm; the principle is the same as in the Faster-RCNN training steps above.
(4) Once the network is trained, an image can be predicted to judge whether it shows a person's profile or frontal face. The specific process is:
A. In the current image frame, detect the pedestrian's position and size, then extract the pedestrian region, denoted R.
B. Normalize image R to a size of 30*10 and feed the normalized image to the neural network as input (each pixel corresponds to exactly one neuron of the input layer; the 300 pixels in total correspond to the 300 neurons of the neural network's input layer).
C. The output of the neural network is a two-dimensional vector: the first component represents the ratio that the input data belongs to a frontal face, and the second component the ratio that it belongs to a profile face.
(5) Select the orientation with the higher ratio as the result of the neural network. If the network judges that the current face is oriented sideways, the operation ends and the automatic door is informed not to open. If the network judges that the face is oriented frontally, the face deflection angle is analyzed.
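The normalization and classification of steps (4)B–C and (5) can be sketched with a tiny forward pass. The layer sizes (300, 200, 400, 400, 2) and the 30*10 input follow the text; the random weights stand in for the trained second network, and the activation and softmax choices are assumptions, so the predicted label here is meaningless and only the data flow is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Layer sizes from the embodiment: 300 -> 200 -> 400 -> 400 -> 2
sizes = [300, 200, 400, 400, 2]
weights = [rng.standard_normal((a, b)) * 0.05 for a, b in zip(sizes, sizes[1:])]

def predict_orientation(region):
    """region: grayscale pedestrian crop resized to 30x10 (300 pixels)."""
    x = region.astype(float).reshape(300) / 255.0   # normalize to [0, 1]
    for w in weights[:-1]:
        x = np.tanh(x @ w)                          # hidden layers
    ratios = softmax(x @ weights[-1])               # [front ratio, side ratio]
    return ("front", "side")[int(np.argmax(ratios))], ratios

crop = rng.integers(0, 256, size=(30, 10))
label, ratios = predict_orientation(crop)
print(label, ratios)  # the two ratios sum to 1
```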
It should be noted that after an image with frontal facial orientation is obtained, the angle in degrees between the facial orientation and the frontal direction (the direction facing the camera) still needs to be judged. For a pedestrian passing in or out through the automatic door, the facial orientation and the camera direction (one camera is mounted directly above the automatic door) must satisfy a certain angle in degrees; if the angle is too large, the automatic door does not open, since the pedestrian does not intend to exit. A neural network is likewise used here to judge the angle in degrees between the facial orientation and the frontal direction. The detailed process is as follows:
(1) Build a third neural network: the network has five layers. The first layer is the input layer with 200 neurons; the second through fourth layers are hidden layers containing 300, 300 and 400 neurons respectively; the last layer is the output layer with 1 neuron.
(2) Build a third training dataset: the dataset contains 5000 face images at different orientations in total.
(3) Train the third neural network, specifically updating the weights with the BP algorithm; the training process is identical to that of the Faster-RCNN training above.
(4) Once the neural network is trained, it is used to predict an image and judge the specific orientation angle of the face. The specific judgment proceeds as follows:
The image region R is cropped so that only its upper portion is retained (the human head occupies roughly a fixed fraction of the entire body length), and the result is denoted TH. Since image region R contains the overall region information of the human body, extracting its upper portion yields essentially complete head information.
The image TH is normalized to a size of 20*10, and the normalized image is fed to the neural network as input (each pixel corresponds to exactly one neuron of the input layer; the 200 pixels in total correspond to the 200 neurons of the neural network's input layer).
The output of the neural network is the facial orientation angle corresponding to the input image.
If the output value lies within a range of ±20° (clockwise taken as positive), the automatic door is notified to open; otherwise the automatic door is not opened.
It should be noted that the intelligent automatic door control method based on image analysis provided in this embodiment acquires images of the monitored area through a camera and analyzes the direction of travel, speed, facial orientation and face deflection angle of the pedestrians in the images, so that the intention of persons in the monitored area to exit can be judged accurately and the intelligent automatic door controlled to open accordingly, avoiding frequent false openings of the intelligent automatic door and the accelerated wear they cause.
In practical applications, to cope with unexpected situations in which the door fails to open automatically (for example, the above judgment mistakenly concludes that a person is not heading toward the door, so the automatic door is not opened), this embodiment provides an emergency strategy: the door can be commanded to open by waving. As shown in Figure 6, wave detection is implemented as follows:
(1) Difference the current frame against the background in the same way as above to obtain the foreground region Q of the human body.
(2) Apply the pedestrian detection described above to the foreground region; from the detection result, the torso region of the human body is obtained.
(3) Delete the torso region obtained in step (2) from the foreground region Q of step (1); the region remaining after the deletion then essentially contains only the arm region of the human body, denoted H.
(4) Apply a thresholding operation and a connected-component operation to the arm region H, with a threshold of 20 (an empirical value).
(5) After the operation of step (4), find the connected component with the largest area (it corresponds to the arm region; the other, smaller connected components are noise regions), and denote it Y.
(6) Compute the bounding rectangle of connected component Y. If the horizontal length of the bounding rectangle is less than its vertical length, the gesture represented by Y is taken to be a wave (when waving, the arm is presented in the vertical direction), and the automatic door is told to open; otherwise it is not opened.
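Steps (4)–(6) of the wave detection can be sketched on a small intensity image. A pure-NumPy flood fill stands in for the connected-component operation, and the toy arm mask is illustrative:

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Label 4-connected components of a binary mask and return the
    pixel coordinates of the largest one (step 5)."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return best

def is_wave(arm_region, threshold=20):
    mask = arm_region > threshold       # step (4): thresholding
    comp = largest_component(mask)      # step (5): largest connected domain Y
    ys = [p[0] for p in comp]
    xs = [p[1] for p in comp]
    height = max(ys) - min(ys) + 1      # bounding rectangle, step (6)
    width = max(xs) - min(xs) + 1
    return width < height               # vertical arm -> wave

# A raised (vertical) arm: tall thin bright stripe plus a one-pixel noise blob
arm = np.zeros((12, 12), dtype=np.uint8)
arm[2:11, 5] = 200   # 9-pixel vertical stripe
arm[1, 10] = 200     # noise, ignored as a smaller component
print(is_wave(arm))  # True
```

In production one would typically use an optimized connected-component routine (e.g. OpenCV's `connectedComponentsWithStats`) rather than this explicit BFS.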
It should be noted that by adding wave detection in this embodiment, when all of the above judgments fail and the door cannot be opened automatically, the pedestrian can open the automatic door through wave detection, which further improves the accuracy of the automatic door's opening and improves the user experience.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
- 1. An intelligent automatic door control method based on image analysis, characterized by comprising: performing background modeling on the monitored area of a camera mounted on the intelligent automatic door to obtain an initial background image; differencing the image currently acquired by the camera against the initial background image to obtain a foreground image; detecting and locating pedestrians in the foreground image to obtain the center position corresponding to each pedestrian region; tracking the pedestrian positions according to the center positions corresponding to the pedestrian regions in consecutive frames of images to obtain the direction of travel of the pedestrian; judging whether the direction of travel of the pedestrian is parallel to the horizontal direction; and if so, controlling the intelligent automatic door to open, otherwise the intelligent automatic door does not open.
- 2. The intelligent automatic door control method based on image analysis according to claim 1, characterized in that after the direction of travel of the pedestrian is judged parallel to the horizontal direction, the method further comprises: (a) obtaining the motion trajectory of the center positions corresponding to the pedestrian regions in consecutive frames of images, and obtaining the pedestrian's walking speed; (b) judging whether the pedestrian's walking speed is below a set threshold, and if so executing step (c), otherwise executing step (f); (c) analyzing the pedestrian's orientation and judging whether the facial orientation is frontal, and if so executing step (d), otherwise executing step (f); (d) analyzing the pedestrian's face deflection angle and judging whether it is within a set range, and if so executing step (e), otherwise executing step (f); (e) controlling the intelligent automatic door to open; (f) the intelligent automatic door not opening.
- 3. The intelligent automatic door control method based on image analysis according to claim 1 or 2, characterized in that after performing background modeling on the monitored area of the camera mounted on the intelligent automatic door and obtaining the initial background image, the method further comprises: cropping the image currently acquired by the camera and the initial background image according to a set regional extent to obtain a reduced-range current image and a reduced-range initial background image; correspondingly, differencing the image currently captured by the camera against the initial background image to obtain the foreground image comprises: differencing the reduced-range image against the reduced-range initial background image to obtain the foreground image.
- 4. The intelligent automatic door control method based on image analysis according to claim 3, characterized in that performing background modeling on the monitored area of the camera mounted on the intelligent automatic door to obtain the background image specifically comprises: acquiring monitoring-area images from the camera to obtain N frames of monitoring-area images; detecting with the frame-difference method whether a moving target is present in the monitoring-area images; if so, re-acquiring the monitoring-area images from the camera until N frames of monitoring-area images free of moving targets are obtained as background images; averaging the pixels at the same pixel position across the N frames of background images to obtain the initial background image; analyzing each pixel in the currently acquired frame of the monitoring-area image to judge whether the pixel belongs to the initial background image; and if a pixel belongs to the background, updating that pixel into the initial background image in a weighted manner, so as to realize the updating of the initial background image.
- 5. The intelligent automatic door control method based on image analysis according to claim 3, characterized in that detecting and locating pedestrians in the foreground image to obtain the center position corresponding to each pedestrian region specifically comprises: training a first neural network with a first training dataset formed from single-pedestrian image samples to obtain a pedestrian detector; extracting the region of each pedestrian from the reduced-range acquired image; and feeding each individual pedestrian region to the pedestrian detector as input, the detector outputting, after prediction, the center position of the pedestrian in the currently acquired image.
- 6. The intelligent automatic door control method based on image analysis according to claim 5, characterized in that tracking the pedestrian positions according to the center positions corresponding to the pedestrian regions in consecutive frames of images to obtain the direction of travel of the pedestrian specifically comprises: feeding all pedestrian regions in the images acquired over consecutive frames to the pedestrian detector for prediction to obtain the center position corresponding to each pedestrian region; matching the pedestrian regions in adjacent frames by the minimum-distance method to obtain the pedestrian regions in adjacent frames that belong to the same pedestrian; associating the pedestrian regions belonging to the same person in adjacent frames, and connecting the centers of the regions corresponding to the same pedestrian across adjacent frames by a line to obtain the motion trajectory of that pedestrian; and judging the direction of travel of the pedestrian from the motion trajectory.
- 7. The intelligent automatic door control method based on image analysis according to claim 6, characterized in that obtaining the motion trajectory of the center positions corresponding to the pedestrian regions in consecutive frames of images and obtaining the pedestrian's walking speed specifically comprises: continuously acquiring the motion trajectory of a pedestrian over a range of N consecutive frames of images, and recording the start point and end point of the trajectory; and dividing the distance between the end point and the start point of the trajectory by the number of frames to obtain the average speed of the pedestrian.
- 8. The intelligent automatic door control method based on image analysis according to claim 3, characterized in that analyzing the pedestrian's orientation specifically comprises: building a second neural network and a second training dataset, training the second neural network on the second training dataset, and obtaining a facial-orientation detector; normalizing the input image frame and feeding the normalized image to the facial-orientation detector to obtain, for the input image frame, the ratio that the facial orientation is frontal and the ratio that it is sideways; and selecting, between the frontal ratio and the sideways ratio, the orientation with the higher value as the detection result of the facial-orientation detector.
- 9. The intelligent automatic door control method based on image analysis according to claim 3, characterized in that analyzing the pedestrian's face deflection angle specifically comprises: building a third neural network and a third training dataset, and training the third neural network on the third training dataset to obtain a face-deflection-angle detector; cropping the input image frame so that only the local image region containing the pedestrian's head is retained; normalizing the local image region to obtain a normalized local image region; and feeding the normalized local image region to the face-deflection-angle detector to obtain the face deflection angle.
- 10. The intelligent automatic door control method based on image analysis according to any one of claims 1-9, characterized in that after detecting and locating pedestrians in the foreground image and obtaining the center position corresponding to each pedestrian region, the method further comprises: cropping the pedestrian region so that the arm region of the human body is retained; applying thresholding and connected-component extraction operations to the arm region of the human body to obtain all connected components of the arm region; selecting the connected component with the largest area and solving the bounding rectangle of that connected region; and controlling the intelligent automatic door to open when the horizontal length of the bounding rectangle is less than its vertical length.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711480203.5A CN108222749B (en) | 2017-12-29 | 2017-12-29 | Intelligent automatic door control method based on image analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108222749A true CN108222749A (en) | 2018-06-29 |
CN108222749B CN108222749B (en) | 2020-10-02 |
Family
ID=62647081
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711480203.5A Active CN108222749B (en) | 2017-12-29 | 2017-12-29 | Intelligent automatic door control method based on image analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108222749B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109636956A (en) * | 2018-10-26 | 2019-04-16 | 深圳云天励飞技术有限公司 | A kind of access control system control method, device and electronic equipment |
CN110316630A (en) * | 2019-06-03 | 2019-10-11 | 浙江新再灵科技股份有限公司 | The deviation method for early warning and system of elevator camera setting angle |
CN112560610A (en) * | 2020-12-03 | 2021-03-26 | 西南交通大学 | Video monitoring object analysis method, device, equipment and readable storage medium |
CN112562139A (en) * | 2020-10-14 | 2021-03-26 | 深圳云天励飞技术股份有限公司 | Access control method and device based on image recognition and electronic equipment |
CN112861593A (en) * | 2019-11-28 | 2021-05-28 | 宁波微科光电股份有限公司 | Elevator door pedestrian detection method and system, computer storage medium and elevator |
CN112850436A (en) * | 2019-11-28 | 2021-05-28 | 宁波微科光电股份有限公司 | Pedestrian trend detection method and system of elevator intelligent light curtain |
CN113501398A (en) * | 2021-06-29 | 2021-10-15 | 江西晶浩光学有限公司 | Control method, control device and storage medium |
CN113668976A (en) * | 2021-07-16 | 2021-11-19 | 广州大学 | Novel intelligence prevents trampling escape door system |
CN114019835A (en) * | 2021-11-09 | 2022-02-08 | 深圳市雪球科技有限公司 | Automatic door opening method and system, electronic device and storage medium |
CN114333134A (en) * | 2022-03-10 | 2022-04-12 | 深圳灏鹏科技有限公司 | Cabin management method, device, equipment and storage medium |
CN116591575A (en) * | 2023-07-18 | 2023-08-15 | 山东锐泽自动化科技股份有限公司 | Rotary door safety control method and system based on machine vision |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102747919A (en) * | 2012-06-18 | 2012-10-24 | 浙江工业大学 | Omnidirectional computer vision-based safe and energy-saving control device for pedestrian automatic door |
CN104463903A (en) * | 2014-06-24 | 2015-03-25 | 中海网络科技股份有限公司 | Pedestrian image real-time detection method based on target behavior analysis |
CN104751491A (en) * | 2015-04-10 | 2015-07-01 | 中国科学院宁波材料技术与工程研究所 | Method and device for tracking crowds and counting pedestrian flow |
US20150234477A1 (en) * | 2013-07-12 | 2015-08-20 | Magic Leap, Inc. | Method and system for determining user input based on gesture |
CN105869185A (en) * | 2016-04-15 | 2016-08-17 | 张志华 | Automatic door |
CN106529442A (en) * | 2016-10-26 | 2017-03-22 | 清华大学 | Pedestrian identification method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN108222749B (en) | 2020-10-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||

Address after: 230000 Yafu Park, Juchao Economic Development Zone, Chaohu City, Hefei City, Anhui Province Applicant after: ANHUI HUISHI JINTONG TECHNOLOGY Co.,Ltd. Address before: Room 602-102, District C, Hefei National University Science Park, Mount Huangshan Road, Hefei 230000, Anhui, China Applicant before: ANHUI HUISHI JINTONG TECHNOLOGY Co.,Ltd.

GR01 | Patent grant | ||