CN106203052A - Intelligent LED interaction method and device - Google Patents
Intelligent LED interaction method and device
- Publication number
- CN106203052A CN106203052A CN201610696775.6A CN201610696775A CN106203052A CN 106203052 A CN106203052 A CN 106203052A CN 201610696775 A CN201610696775 A CN 201610696775A CN 106203052 A CN106203052 A CN 106203052A
- Authority
- CN
- China
- Prior art keywords
- module
- answer
- face
- user
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The present invention relates to an intelligent LED interactive device comprising a central processing unit, a memory, an LED display, a video acquisition module, a face detection module, an audio acquisition module, a detection controller, a gaze recognition module, a voiceprint recognition module, a speech recognition module, a semantic analysis module, an answer extraction module, a network search module, a speech synthesis module, and an audio playback module. The invention further relates to an intelligent LED interaction method. During the interaction, the device and method can confirm the identity of the current interactive user from the stored facial feature data or voiceprint feature data, preventing the interaction from being interrupted by other people addressing the device, ensuring that the user is accurately matched with the needed information throughout the interaction, making the interaction more intelligent, and improving the interactive experience.
Description
Technical field
The present invention relates to an intelligent LED interaction method, and further relates to an intelligent LED interactive device.
Background art
With the development of electronic technology, information such as advertisements, news, and announcements is increasingly presented on LED displays, which are widely distributed in buildings, elevators, corridors, subways, bus stops, and similar locations. As expectations for user experience continue to rise, LED display technologies capable of intelligent interaction have emerged and developed.
The Chinese invention patent application with publication No. CN104080010A (application No. 201410295113.9), "Interactive information transmission system and method", discloses an interactive system that recognizes faces oriented toward the display screen in a foreground image. When a face has faced the screen for longer than a set time, the system plays second images, text, and data corresponding to the first images, text, and data currently shown, and sends them to the display for playback. The human-machine interaction content in this system is limited to what is stored on the display and is played passively to the user, so the interaction is severely constrained. Moreover, the user is in a dynamic state during use, and the system cannot track that state in real time; that is, it cannot tell when the interactive user has changed, and therefore cannot accurately meet the needs of different users.
The Chinese invention patent application with publication No. CN102221881A (application No. 201110131915.2), "Man-machine interaction method based on a bionic agent and tracking analysis of interest regions", discloses an interaction method that computes the focal position of the user's eyes on the screen, obtains that focal position, and then analyzes the user's region of interest, thereby realizing natural, harmonious human-machine interaction. However, this method likewise ignores detection of the user's dynamic state, so the content of interest is easily misjudged.
Summary of the invention
The first technical problem to be solved by the invention, in view of the above prior art, is to provide an intelligent LED interaction method that can detect the user's dynamic state in real time to confirm the identity of the current user and exchange information accurately with the interactive user.
The second technical problem to be solved by the invention, in view of the above prior art, is to provide an intelligent LED interactive device that can detect and confirm the user's identity during the interaction, so as to guarantee the accuracy of the interactive information.
The technical solution adopted by the invention to solve the first technical problem is an intelligent LED interaction method, characterized by comprising the following steps:
Step 1: initialize; the LED display shows the set homepage content;
Step 2: acquire the video picture in front of the LED display;
Step 3: perform face detection on the video picture image in real time and judge whether a face is present; when a face is detected in the picture image, enter step 4;
Step 4: traverse all face coordinates and features in the video picture image and obtain the coordinates and feature data of the largest face;
Step 5: calculate and judge whether the size of the largest face exceeds the set face size threshold; if it does, store the feature data of the largest face and enter step 6; if not, return to step 2;
Step 6: judge whether the accumulated time for which the largest face appears in the video picture exceeds the set recognition time threshold; if it does, enter step 7; if not, delete the feature data of the largest face and return to step 2;
Step 7: check the interaction state flag; if it indicates a working state, delete the feature data of the largest face and return to step 2; if it indicates an idle state, start the interaction (a sketch of this detection phase follows);
Step 8: detect whether user speech information is present; if no user speech is detected, enter step 9; if user speech is detected, enter step 14;
Step 9: set the interaction state flag to the visual-interaction working state, take the user corresponding to the largest face in step 4 as the current interactive user, analyze the screen area corresponding to the current interactive user's gaze angle relative to the LED display, and separately time the accumulated gaze on each watch region;
Step 10: compare the accumulated time the user has gazed at each watch region with the set fixation time threshold; if the accumulated time for every watch region is below the threshold, set the interaction state flag to idle, delete the feature data of the largest face, and return to step 2; if the accumulated time for any watch region exceeds the threshold, proceed to step 11;
Step 11: take the watch region with the longest accumulated gaze on the LED display as the user's region of interest, and have the LED display play more detailed content corresponding to what that region currently shows (a sketch of this gaze-accumulation logic follows);
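A corresponding sketch of the gaze-accumulation logic of steps 9-11, under the same caveat: `gaze_region` is an assumed callback that maps the current gaze angle to a screen-region id (or None when the user looks away), and the timing constants are illustrative:

```python
import time
from collections import defaultdict

FIXATION_TIME_THRESHOLD = 1.5  # assumed seconds of accumulated gaze marking interest

def interest_region(gaze_region, frame_period=0.04, observe_for=10.0):
    """Steps 9-11: accumulate gaze time per watch region, then pick the region
    with the longest dwell as the region of interest (None if nothing qualifies)."""
    dwell = defaultdict(float)
    deadline = time.monotonic() + observe_for
    while time.monotonic() < deadline:
        region = gaze_region()
        if region is not None:
            dwell[region] += frame_period          # step 9: per-region gaze timing
        time.sleep(frame_period)
    if not dwell or max(dwell.values()) < FIXATION_TIME_THRESHOLD:
        return None                                # step 10: below threshold everywhere
    return max(dwell, key=dwell.get)               # step 11: longest dwell wins
```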
Step 12: acquire the video picture in front of the LED display in real time; judge whether the facial feature data corresponding to the current interactive user is still present in the video picture image; if so, the LED display keeps playing the content of step 11; if the facial feature data of the current interactive user is lost, time how long it has been lost;
Step 13: if the loss time of the facial feature data in step 12 exceeds the set loss time threshold, the LED display stops playing the current content and then shows the set homepage content, the facial feature data corresponding to the current interactive user is deleted, the interaction state flag is set to idle, and the method returns to step 2; if the loss time does not exceed the threshold, the LED display keeps playing the current content until playback ends, after which the facial feature data corresponding to the current interactive user is deleted, the interaction state flag is set to idle, and the method returns to step 2 (a sketch of this loss-timeout logic follows);
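The loss-timeout behaviour of steps 12-13 can be sketched the same way; `still_present` and the `playback` object are assumed interfaces standing in for the face-matching and display modules:

```python
import time

LOSS_TIME_THRESHOLD = 3.0  # assumed seconds the face may vanish before playback aborts

def play_until_lost(user_features, still_present, playback):
    """Steps 12-13: keep playing while the current user's face stays in frame;
    abort if the face is lost for longer than the loss time threshold."""
    lost_since = None
    while playback.is_playing():
        if still_present(user_features):
            lost_since = None                      # user re-acquired: reset loss timer
        elif lost_since is None:
            lost_since = time.monotonic()          # step 12: start timing the loss
        elif time.monotonic() - lost_since > LOSS_TIME_THRESHOLD:
            playback.stop()                        # step 13: abort, fall back to homepage
            return "lost"
        time.sleep(0.1)
    return "finished"                              # played to the end, user present
```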
Step 14: set the interaction state flag to the voice-interaction working state, obtain the speech information of the current interactive user, and extract and save the voiceprint features of the current interactive user;
Step 15: recognize and semantically analyze the obtained speech information, extract an answer, and show the answer on the LED display and/or synthesize it into speech and deliver it to the current interactive user;
Step 16: while step 15 is in progress, acquire the video picture in front of the LED display in real time and perform face detection on it to judge whether any face is present; if no face is present, accumulate the face loss time; if the accumulated loss time exceeds the set voice-interaction user loss time threshold, immediately end step 15, delete the voiceprint features of the current interactive user, show the set homepage content on the LED display, set the interaction state flag to idle, and return to step 2;
while step 15 is in progress, any speech information received is not responded to;
Step 17: within the set waiting time threshold, if speech information is obtained, enter step 18; if no speech information is obtained, delete the voiceprint features of the current interactive user, show the set homepage content on the LED display, set the interaction state flag to idle, and return to step 2;
Step 18: judge whether the voiceprint features of the obtained speech information match those of the current interactive user; if so, enter step 15; if not, enter step 17 (a sketch of this voiceprint-gated dialogue loop follows).
As an improvement, in step 15 the recognition and semantic analysis of the obtained speech information comprises the following steps:
Step 15.1: recognize the speech information and convert it into speech text;
Step 15.2: segment the speech text into words and extract its core words and keywords;
Step 15.3: search the local knowledge base and extract an answer according to the core words and keywords extracted from the speech text;
Step 15.4: judge whether the answer was extracted successfully; if so, enter step 15.9; otherwise enter step 15.5;
Step 15.5: search the wide area network or the Internet for an answer according to the core words and keywords extracted from the speech text;
Step 15.6: judge whether an answer was found; if so, enter steps 15.8 and 15.9; otherwise enter step 15.7;
Step 15.7: the LED display shows, and/or a voice prompt announces, that answer acquisition failed, and the question whose answer could not be obtained is recorded so that an answer can be added manually;
Step 15.8: save the answer to the local knowledge base;
Step 15.9: show the answer on the LED display and/or synthesize it into speech and deliver it to the current interactive user.
To shorten answer acquisition, the weights of the core words and keywords extracted from the speech text are increased, and at the same time the weight of the answer corresponding to those core words and keywords is increased; in step 15.3, when a core word and keyword correspond to multiple answers, the answer with the highest weight is extracted (a sketch of this weighted lookup follows).
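The weighted lookup of steps 15.2-15.8 might look like the following sketch; `segment`, `local_kb` (with `lookup`/`save`/`record_failure`), and `web_search` are assumed interfaces, and the two weight dictionaries model the keyword and answer weights described above:

```python
def answer_query(text, segment, local_kb, web_search, kw_weights, ans_weights):
    """Steps 15.2-15.8: keyword extraction, weighted local lookup, web fallback."""
    keywords = segment(text)                              # step 15.2: segmentation
    for kw in keywords:
        kw_weights[kw] = kw_weights.get(kw, 0) + 1        # recurring keywords gain weight

    candidates = local_kb.lookup(keywords)                # step 15.3: local knowledge base
    if candidates:
        # multiple candidate answers: extract the one with the highest weight
        best = max(candidates, key=lambda a: ans_weights.get(a, 0))
        ans_weights[best] = ans_weights.get(best, 0) + 1  # chosen answer gains weight
        return best

    answer = web_search(keywords)                         # step 15.5: web fallback
    if answer is not None:
        local_kb.save(keywords, answer)                   # step 15.8: cache for next time
        return answer
    local_kb.record_failure(text)                         # step 15.7: log for manual answers
    return None
```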
Preferably, when speech information is obtained, whether the audio sound intensity of the speech exceeds the set audio sound intensity threshold is detected; if the sound intensity does not exceed the threshold, the speech is regarded as not received; if it does, the speech is regarded as received.
Preferably, in step 8, speech information is detected within a set speech detection time threshold; if speech is detected within that window, user speech is regarded as detected; if not, user speech is regarded as not detected (a sketch of this gating follows).
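Both preferences reduce to a simple gate on loudness and arrival time. A sketch, with `next_chunk` and `intensity` as assumed capture and measurement callbacks and illustrative threshold values:

```python
import time

SOUND_INTENSITY_THRESHOLD = 60.0  # assumed level below which audio counts as noise
VOICE_DETECTION_WINDOW = 5.0      # assumed seconds to listen for speech in step 8

def wait_for_speech(next_chunk, intensity):
    """Step 8 gating: only audio louder than the intensity threshold, arriving
    within the detection window, is treated as user speech."""
    deadline = time.monotonic() + VOICE_DETECTION_WINDOW
    while time.monotonic() < deadline:
        chunk = next_chunk()
        if chunk is not None and intensity(chunk) > SOUND_INTENSITY_THRESHOLD:
            return chunk                           # treated as received speech
    return None                                    # treated as no user speech detected
```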
The technical solution adopted by the invention to solve the second technical problem is an intelligent LED interactive device, characterized by comprising:
a central processing unit for processing data and transmitting control commands;
a memory, connected to the central processing unit, for storing data; the memory contains a preset content storage unit for storing preset text, pictures, video, and speech, and a local knowledge base for storing questions and answers;
an LED display, connected to the central processing unit, for showing text, pictures, and video;
a video acquisition module for capturing the video picture in front of the LED display;
a face detection module, connected to the video acquisition module, for detecting and obtaining the coordinates and feature data of faces in the video picture transmitted by the video acquisition module;
an audio acquisition module for capturing the user's speech information;
a detection controller, connected to the face detection module and the audio acquisition module, for computing and comparing the face size against the set face size threshold, and for detecting whether user speech information is present;
a gaze recognition module, connected to the detection controller and the central processing unit, for computing and analyzing the angle and duration of the user's gaze at each watch region of the LED display, and thereby obtaining the user's region of interest on the LED display;
a voiceprint recognition module, connected to the audio acquisition module and the detection controller, for recognizing the voiceprint features in the user speech information transmitted by the audio acquisition module;
a speech recognition module, connected to the audio acquisition module and the detection controller, for recognizing and analyzing the user speech information transmitted by the audio acquisition module and converting it into speech text;
a semantic analysis module, connected to the speech recognition module and the central processing unit, for analyzing the speech text transmitted by the speech recognition module to extract its core words and keywords and sending them to the central processing unit;
an answer extraction module, connected to the central processing unit and to the local knowledge base in the memory, for searching the local knowledge base and extracting an answer according to the core words and keywords transmitted by the central processing unit, and sending the answer to the central processing unit;
a network search module, connected to the central processing unit, for searching the network for an answer, under the control command of the central processing unit, when the answer extraction module fails to extract an answer from the local knowledge base;
a speech synthesis module, connected to the central processing unit, for synthesizing the answer transmitted by the central processing unit into speech audio;
an audio playback module, connected to the speech synthesis module, the central processing unit, and the memory, for playing the speech audio synthesized by the speech synthesis module and the audio data in the memory.
Compared with the prior art, the advantage of the invention is that the intelligent LED interaction method and intelligent LED interactive device can, in use, detect the user's dynamic state and identity features in real time to confirm the identity of the current user, thereby matching the user with the needed information during the interaction and guaranteeing that the matched content is delivered accurately to the user. This makes the interaction more intelligent, avoids wasting interaction resources, and improves the accuracy and effectiveness of the interaction.
Brief description of the drawings
Fig. 1 is a structural block diagram of the intelligent LED interactive device in an embodiment of the present invention.
Fig. 2 is a flow chart of the intelligent LED interaction in an embodiment of the present invention.
Detailed description of the invention
The present invention is described in further detail below in conjunction with the embodiments shown in the accompanying drawings.
As shown in Fig. 1, the intelligent LED interactive device in this embodiment includes: a central processing unit 1, a memory 2, an LED display 3, a video acquisition module 4, a face detection module 5, an audio acquisition module 6, a detection controller 7, a gaze recognition module 8, a voiceprint recognition module 9, a speech recognition module 10, a semantic analysis module 11, an answer extraction module 12, a network search module 13, a speech synthesis module 14, and an audio playback module 15.
The central processing unit 1 processes data and transmits control commands.
The memory 2 is connected to the central processing unit 1 and stores data. The memory 2 in this embodiment is provided with a preset content storage unit 21 dedicated to storing preset text, pictures, video, and speech, and a local knowledge base 22 for storing questions and answers.
The LED display 3 is connected to the central processing unit 1 and, under its control, shows content such as the text, pictures, and video in the memory 2.
The video acquisition module 4 is mounted on the LED display 3 and captures the video picture in front of it. In this embodiment the video acquisition module 4 can be a camera that captures the video picture in front of the LED display 3.
The face detection module 5 is connected to the video acquisition module 4 and can be a prior-art face detection device or integrated chip; it detects and obtains the coordinates and feature data of the faces in the video picture transmitted by the video acquisition module 4.
The audio acquisition module 6 captures the user's speech information; it can be mounted on the LED display 3 or arranged near it. In this embodiment the audio acquisition module 6 can be a microphone.
The detection controller 7 is connected to, and works with, the face detection module 5 and the audio acquisition module 6; it can be a single-chip microcomputer. The detection controller 7 computes and compares the face size against the set face size threshold, and thereby decides whether to start the interactive operation of the device. It can also detect whether user speech information is present; this judgment can rely on a preset audio sound intensity threshold to decide whether user speech has been captured.
The gaze recognition module 8 is connected to the detection controller 7 and the central processing unit 1; it can be an off-the-shelf prior-art gaze recognition device or an existing gaze recognition integrated chip. The gaze recognition module 8 computes and analyzes the angle and duration of the user's gaze at each watch region of the LED display 3, and thereby obtains the user's region of interest on the LED display 3.
The voiceprint recognition module 9 is connected to the audio acquisition module 6 and the detection controller 7; it can be an existing voiceprint recognition device or integrated chip. It recognizes the voiceprint features in the user speech information transmitted by the audio acquisition module 6, so that after analyzing multiple pieces of speech it can decide whether they carry the voiceprint features of the same user, which makes it convenient to confirm the user's identity; the analysis result is then sent to the detection controller 7.
The speech recognition module 10 is connected to the audio acquisition module 6 and the detection controller 7; it can be an existing speech recognition device or integrated chip. It recognizes and analyzes the user speech information transmitted by the audio acquisition module 6 and converts it into speech text.
The semantic analysis module 11 is connected to the speech recognition module 10 and the central processing unit 1; it can be an existing speech analysis device or integrated chip. It analyzes the speech text transmitted by the speech recognition module 10 to extract its core words and keywords and sends them to the central processing unit 1. According to the number of times a core word or keyword has been sent to the central processing unit 1, a corresponding weight can be assigned to it, which is convenient for control during use.
The answer extraction module 12 is connected to the central processing unit 1 and to the local knowledge base 22 in the memory 2. It can be an existing information retrieval device or integrated chip, and the search algorithm it uses can be any of various prior-art search algorithms. According to the core words and keywords transmitted by the central processing unit 1, the answer extraction module 12 extracts a corresponding answer from the local knowledge base 22 by fuzzy search, and sends the extracted answer to the central processing unit 1. The central processing unit 1 can also set the weight of the corresponding answer in the local knowledge base 22 according to how often that answer is received, so that answer selection can conveniently be controlled by answer weight.
The network search module 13 is connected to the central processing unit 1 and provides the network connection to the external wide area network or the Internet. When the answer extraction module 12 fails to extract an answer from the local knowledge base 22, the network search module 13 searches the network for an answer under the control command of the central processing unit 1 and sends the found answer to the central processing unit 1; the central processing unit 1 can then, on the one hand, have the answer shown and, on the other hand, store it in the local knowledge base 22 for later use. The network search module 13 in this embodiment can be an existing network search device or integrated chip.
The speech synthesis module 14 is connected to the central processing unit 1 and synthesizes the answer transmitted by the central processing unit 1 into speech audio. It can be an existing, mature speech synthesis device or integrated chip.
The audio playback module 15 is connected to the speech synthesis module 14, the central processing unit 1, and the memory 2; it plays the speech audio synthesized by the speech synthesis module 14 and the audio data in the memory 2. The audio playback module 15 can be an ordinary loudspeaker, and the audio playback software can be arranged in the central processing unit 1.
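Before turning to the method flow, the module inventory of Fig. 1 can be written down as a plain data structure for reference; the field types are hypothetical placeholders, since the patent leaves each module's implementation open:

```python
from dataclasses import dataclass

@dataclass
class SmartLedDevice:
    """One field per module of Fig. 1; reference numerals in the comments."""
    cpu: object            # 1: central processing unit - data processing, control commands
    memory: object         # 2: preset content storage unit 21 + local knowledge base 22
    display: object        # 3: LED display
    video_in: object       # 4: video acquisition module (camera)
    face_detector: object  # 5: face coordinates and feature data
    audio_in: object       # 6: audio acquisition module (microphone)
    detector_ctrl: object  # 7: face-size gating, speech presence (single-chip microcomputer)
    gaze_id: object        # 8: gaze recognition -> region of interest
    voiceprint_id: object  # 9: voiceprint features -> user identity
    speech_id: object      # 10: speech -> text
    semantics: object      # 11: text -> core words and keywords
    answer_ext: object     # 12: local knowledge base lookup
    web_search: object     # 13: network fallback search
    tts: object            # 14: answer -> speech audio
    audio_out: object      # 15: loudspeaker playback
```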
The intelligent LED interaction method in this embodiment comprises the following steps:
Step 1: initialize; the central processing unit 1 retrieves the preset text, pictures, and video from the preset content storage unit 21 of the memory 2 and controls the LED display 3 to show the set homepage content; the LED display 3 is divided into multiple display areas, and the set homepage content shows different content in the different display areas;
Step 2: use the video acquisition module 4 to capture the video picture in front of the LED display 3;
Step 3: based on the video picture captured by the video acquisition module 4, use the face detection module 5 to perform face detection on the video picture image in real time and judge whether a face is present; when a face is detected in the picture image, enter step 4;
Step 4: use the face detection module 5 to traverse all face coordinates and features in the video picture image and send them to the detection controller 7, which computes the coordinates and feature data of the largest face;
Step 5: the detection controller 7 calculates and judges whether the size of the largest face exceeds the set face size threshold; if it does, the feature data of the largest face is stored in the detection controller 7 and the method enters step 6; if not, return to step 2;
Step 6: the detection controller 7 judges whether the accumulated time for which the largest face appears in the video picture exceeds the set recognition time threshold; if it does, enter step 7; if not, the detection controller 7 deletes the stored feature data of the largest face and returns to step 2;
Step 7: the detection controller 7 checks its internal interaction state flag; if the current flag indicates a working state, the feature data of the largest face stored in the detection controller 7 is deleted and the method returns to step 2; if the current flag indicates an idle state, the interaction starts;
Step 8: the detection controller 7 examines the user speech information captured by the audio acquisition module 6 and judges whether user speech is present.
When the detection controller 7 obtains speech information, it checks whether the audio sound intensity of the speech exceeds the audio sound intensity threshold set in the detection controller 7; if it does not, the speech is regarded as not received; if it does, the speech is regarded as received. At the same time, the detection controller 7 detects speech within its set speech detection time threshold: if speech is detected within that window, user speech is regarded as detected; if not, user speech is regarded as not detected.
If the detection controller 7 does not detect user speech, enter step 9; if it does, enter step 14;
Step 9: the detection controller 7 sets its internal interaction state flag to the visual-interaction working state and takes the user corresponding to the largest face in step 4 as the current interactive user; the gaze recognition module 8 obtains the feature data of the largest face from the detection controller 7, analyzes the gaze angle of the corresponding current interactive user relative to the LED display 3, and from it determines all of the current interactive user's watch regions on the screen, while separately timing the accumulated gaze of the current interactive user on each watch region;
Step 10: the gaze recognition module 8 compares the accumulated time the user has gazed at each watch region with its set fixation time threshold.
If the accumulated time for every watch region is below the threshold, the gaze recognition module 8 feeds back to the detection controller 7 and the central processing unit 1 that the current visual-interaction session has ended; the detection controller 7 then sets its internal interaction state flag to idle and deletes the stored feature data of the largest face, the central processing unit 1 keeps the LED display 3 on the homepage content, and the method returns to step 2.
If the accumulated time for any watch region exceeds the set fixation time threshold, proceed to step 11;
Step 11: the gaze recognition module 8 identifies the watch region on the LED display 3 with the longest accumulated gaze as the user's region of interest and sends the result to the central processing unit 1; the central processing unit 1 controls the LED display 3 to show and play more detailed content corresponding to what the region of interest currently displays, and if the LED display 3 is playing video content, the central processing unit 1 simultaneously controls the audio playback module 15 to play the corresponding audio data;
Step 12: while the LED display 3 plays the detailed content of the region of interest, the video acquisition module 4 captures the video picture in front of the LED display 3 in real time and sends it to the face detection module 5; the face detection module 5 sends all face coordinates and features in the video picture image to the detection controller 7, which compares the newly obtained facial feature data with the stored feature data of the largest face, thereby judging whether the facial feature data corresponding to the current interactive user is still present in the video picture image.
If the detection controller 7 judges that the facial feature data corresponding to the current interactive user is present, the gaze recognition module 8 assumes by default that the current interactive user is watching the LED display 3 and its playing content, and sends this gaze result to the central processing unit 1, which controls the LED display 3 to keep playing the content of step 11.
If the detection controller 7 judges that the facial feature data corresponding to the current interactive user has been lost, it times how long the data has been lost;
Step 13: if the loss time of the facial feature data in step 12 exceeds the loss time threshold set in the detection controller 7, the detection controller 7 deletes its stored largest-face feature data corresponding to the current interactive user, sets its internal interaction state flag to idle, and the method returns to step 2. At the same time, the detection controller 7 notifies the gaze recognition module 8 that the current interactive user has been lost; the gaze recognition module 8 accordingly assumes by default that the current interactive user is no longer watching the LED display 3 and its playing content, and sends this gaze result to the central processing unit 1, which stops the playback of the current content on the LED display 3 and controls it to show the set homepage content.
If the loss time of the facial feature data in step 12 does not exceed the set loss time threshold, the LED display 3 keeps playing the current content until playback ends. After playback ends, the central processing unit 1 obtains from the LED display 3 the information that the current visual-interaction session has ended and passes it via the gaze recognition module 8 to the detection controller 7, which then deletes the facial feature data corresponding to the current interactive user, sets its internal interaction state flag to idle, and the method returns to step 2;
Step 14: the detection controller 7 sets its internal interaction state flag to the voice-interaction working state and directs the voiceprint recognition module 9 and the speech recognition module 10 to receive, from the audio acquisition module 6, the speech information whose audio sound intensity exceeds the threshold set in the detection controller 7; the voiceprint recognition module 9 extracts the voiceprint features from the speech information and sends them to the detection controller 7, which stores the voiceprint feature data of the current interactive user;
Step 15: the obtained speech information is recognized and semantically analyzed, specifically through the following steps:
Step 15.1: the speech recognition module 10 recognizes the speech information, converts it into speech text, and sends the speech text to the semantic analysis module 11;
Step 15.2: the semantic analysis module 11 segments the obtained speech text into words, extracts its core words and keywords, and sends them to the central processing unit 1;
Step 15.3: the central processing unit 1 adjusts the weights of the corresponding core words and keywords according to the number of times they have been received, and at the same time sends the received core words and keywords to the answer extraction module 12, which uses a suitable search algorithm to search the local knowledge base 22 and extract an answer; depending on the search requirements, the answer extraction module 12 can apply any of various existing search algorithms to the local knowledge base 22. The weights of the different core words and keywords can further serve as search priorities, optimizing the search process and shortening the search time;
Step 15.4: the answer extraction module 12 sends the search result to the central processing unit 1. If it found an answer in the local knowledge base 22, the answer is sent to the central processing unit 1, which adjusts the weight of that answer in the local knowledge base, and the method enters step 15.9; otherwise the failure result is sent to the central processing unit 1 and the method enters step 15.5. Adjusting answer weights realizes answer priorities, so that when multiple usable answers exist, the more accurate answer can be selected by priority;
Step 15.5: the central processing unit 1 transmits the aforementioned core words and keywords to the network search module 13 and sends it the control command to start working; the network search module 13 searches the wide area network or the Internet for the answer corresponding to those core words and keywords;
Step 15.6: if the network search module 13 finds an answer, it sends the answer to the central processing unit 1 and the method enters steps 15.8 and 15.9; otherwise the network search module 13 returns the failure result to the central processing unit 1 and the method enters step 15.7;
Step 15.7: the central processing unit 1 controls the LED display 3 to show, and/or the audio playback module 15 to announce, that answer acquisition failed, and at the same time records the question whose answer could not be obtained so that an answer can be added manually;
Step 15.8: the central processing unit 1 saves the answer found by the network search module 13, together with the corresponding core words and keywords, into the local knowledge base 22;
Step 15.9: the central processing unit 1 controls the LED display 3 to show the answer, and/or sends the answer to the speech synthesis module 14, which synthesizes it into speech audio that the audio playback module 15 plays to the current interactive user.
Step 16: while step 15 is in progress, the video acquisition module 4 captures the video picture in front of the LED display 3 in real time and the face detection module 5 performs face detection on the video picture image in real time to judge whether any face is present. If no face is present, the detection controller 7 accumulates the face loss time; if the accumulated loss time exceeds the voice-interaction user loss time threshold set in the detection controller 7, step 15 is ended immediately, the detection controller 7 deletes its stored voiceprint features of the current interactive user and sets its internal interaction state flag to idle, the gaze recognition module 8 accordingly assumes by default that no interactive user is watching the LED display 3 and sends this result to the central processing unit 1, and the central processing unit 1 controls the LED display 3 to show the set homepage content; the method then returns to step 2.
While step 15 is in progress, any speech information the detection controller 7 receives from the audio acquisition module 6 is not responded to;
Step 17: after the current round of voice interaction completes, the detection controller 7 resumes waiting for speech information transmitted by the audio acquisition module 6; if speech information is obtained within the waiting time threshold set in the detection controller 7, enter step 18; if not, delete the voiceprint features of the current interactive user, show the set homepage content on the LED display 3, set the interaction state flag to idle, and return to step 2;
Step 18: the voiceprint recognition module 9 obtains the new speech information transmitted by the audio acquisition module 6, extracts its voiceprint features, and sends them to the detection controller 7, which compares the voiceprint features of the new speech information with the stored voiceprint features, thereby judging whether the obtained speech carries the voiceprint features of the current interactive user; if so, enter step 15; if not, enter step 17.
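Putting the embodiment together, the top-level loop of Fig. 2 can be sketched by combining the fragments above; `dev` bundles the assumed module interfaces, and every call on it is a placeholder rather than an API the patent defines:

```python
def run_device(dev):
    """Top-level flow of Fig. 2: detect a user, then branch into the visual
    (steps 9-13) or voice (steps 14-18) interaction mode."""
    state = {"flag": "idle"}
    while True:
        dev.show_homepage()                                       # step 1
        face = detection_phase(dev.grab_frame, dev.detect_faces, state)  # steps 2-7
        if dev.speech_pending():                                  # step 8
            state["flag"] = "voice"
            voice_session(dev.listen, dev.extract_voiceprint,
                          dev.same_speaker, dev.answer_and_reply)  # steps 14-18
        else:
            state["flag"] = "visual"
            region = interest_region(dev.gaze_region)             # steps 9-11
            if region is not None:
                dev.play_detail(region)
                play_until_lost(face["features"], dev.still_present,
                                dev.playback)                     # steps 12-13
        state["flag"] = "idle"                                    # return to step 2
```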
Claims (6)
1. An intelligent LED interaction method, characterized by comprising the following steps:
step 1: initializing; the LED display shows the set homepage content;
step 2: acquiring the video picture in front of the LED display;
step 3: performing face detection on the video picture image in real time and judging whether a face is present; when a face is detected in the picture image, entering step 4;
step 4: traversing all face coordinates and features in the video picture image and obtaining the coordinates and feature data of the largest face;
step 5: calculating and judging whether the size of the largest face exceeds the set face size threshold; if it does, storing the feature data of the largest face and entering step 6; if not, returning to step 2;
step 6: judging whether the accumulated time for which the largest face appears in the video picture exceeds the set recognition time threshold; if it does, entering step 7; if not, deleting the feature data of the largest face and returning to step 2;
step 7: checking the interaction state flag; if it indicates a working state, deleting the feature data of the largest face and returning to step 2; if it indicates an idle state, starting the interaction;
step 8: detecting whether user speech information is present; if no user speech is detected, entering step 9; if user speech is detected, entering step 14;
step 9: setting the interaction state flag to the visual-interaction working state, taking the user corresponding to the largest face in step 4 as the current interactive user, analyzing the screen area corresponding to the current interactive user's gaze angle relative to the LED display, and separately timing the accumulated gaze on each watch region;
step 10: comparing the accumulated time the user has gazed at each watch region with the set fixation time threshold; if the accumulated time for every watch region is below the threshold, setting the interaction state flag to idle, deleting the feature data of the largest face, and returning to step 2; if the accumulated time for any watch region exceeds the threshold, proceeding to step 11;
step 11: taking the watch region with the longest accumulated gaze on the LED display as the user's region of interest, the LED display playing more detailed content corresponding to what that region currently shows;
step 12: acquiring the video picture in front of the LED display in real time; judging whether the facial feature data corresponding to the current interactive user is still present in the video picture image; if so, the LED display keeps playing the content of step 11; if the facial feature data of the current interactive user is lost, timing how long it has been lost;
step 13: if the loss time of the facial feature data in step 12 exceeds the set loss time threshold, the LED display stops playing the current content and then shows the set homepage content, the facial feature data corresponding to the current interactive user is deleted, the interaction state flag is set to idle, and the method returns to step 2; if the loss time does not exceed the threshold, the LED display keeps playing the current content until playback ends, the facial feature data corresponding to the current interactive user is deleted, the interaction state flag is set to idle, and the method returns to step 2;
step 14: setting the interaction state flag to the voice-interaction working state, obtaining the speech information of the current interactive user, and extracting and saving the voiceprint features of the current interactive user;
step 15: recognizing and semantically analyzing the obtained speech information, extracting an answer, and showing the answer on the LED display and/or synthesizing it into speech and delivering it to the current interactive user;
step 16: while step 15 is in progress, acquiring the video picture in front of the LED display in real time and performing face detection on it to judge whether any face is present; if no face is present, accumulating the face loss time; if the accumulated loss time exceeds the set voice-interaction user loss time threshold, immediately ending step 15, deleting the voiceprint features of the current interactive user, showing the set homepage content on the LED display, setting the interaction state flag to idle, and returning to step 2;
while step 15 is in progress, any speech information received is not responded to;
step 17: within the set waiting time threshold, if speech information is obtained, entering step 18; if no speech information is obtained, deleting the voiceprint features of the current interactive user, showing the set homepage content on the LED display, setting the interaction state flag to idle, and returning to step 2;
step 18: judging whether the voiceprint features of the obtained speech information match those of the current interactive user; if so, entering step 15; if not, entering step 17.
2. The intelligent LED interaction method according to claim 1, characterized in that in step 15 the recognition and semantic analysis of the obtained speech information comprises the following steps:
step 15.1: recognizing the speech information and converting it into speech text;
step 15.2: segmenting the speech text into words and extracting its core words and keywords;
step 15.3: searching the local knowledge base and extracting an answer according to the extracted core words and keywords;
step 15.4: judging whether the answer was extracted successfully; if so, entering step 15.9; otherwise entering step 15.5;
step 15.5: searching the wide area network or the Internet for an answer according to the extracted core words and keywords;
step 15.6: judging whether an answer was found; if so, entering steps 15.8 and 15.9; otherwise entering step 15.7;
step 15.7: the LED display showing, and/or a voice prompt announcing, that answer acquisition failed, and recording the question whose answer could not be obtained so that an answer can be added manually;
step 15.8: saving the answer to the local knowledge base;
step 15.9: showing the answer on the LED display and/or synthesizing it into speech and delivering it to the current interactive user.
3. The intelligent LED interaction method according to claim 2, characterized in that: according to the core words and keywords extracted from the speech text, the weights of the corresponding core words and keywords are increased, and at the same time the weight of the answer corresponding to those core words and keywords is increased; in step 15.3, when a core word and keyword correspond to multiple answers, the answer with the highest weight is extracted.
4. The intelligent LED interaction method according to claim 1, characterized in that: when speech information is obtained, whether the audio sound intensity of the speech exceeds the set audio sound intensity threshold is detected; if the sound intensity does not exceed the threshold, the speech is regarded as not received; if it does, the speech is regarded as received.
5. The intelligent LED interaction method according to claim 4, characterized in that: in step 8, speech information is detected within a set speech detection time threshold; if speech is detected within that window, user speech is regarded as detected; if not, user speech is regarded as not detected.
6. An intelligent LED interactive device for implementing the intelligent LED interaction method of any one of claims 1 to 5, characterized by comprising:
a central processing unit for processing data and transmitting control commands;
a memory, connected to the central processing unit, for storing data; the memory contains a preset content storage unit for storing preset text, pictures, video, and speech, and a local knowledge base for storing questions and answers;
an LED display, connected to the central processing unit, for showing text, pictures, and video;
a video acquisition module for capturing the video picture in front of the LED display;
a face detection module, connected to the video acquisition module, for detecting and obtaining the coordinates and feature data of faces in the video picture transmitted by the video acquisition module;
an audio acquisition module for capturing the user's speech information;
a detection controller, connected to the face detection module and the audio acquisition module, for computing and comparing the face size against the set face size threshold, and for detecting whether user speech information is present;
a gaze recognition module, connected to the detection controller and the central processing unit, for computing and analyzing the angle and duration of the user's gaze at each watch region of the LED display, and thereby obtaining the user's region of interest on the LED display;
a voiceprint recognition module, connected to the audio acquisition module and the detection controller, for recognizing the voiceprint features in the user speech information transmitted by the audio acquisition module;
a speech recognition module, connected to the audio acquisition module and the detection controller, for recognizing and analyzing the user speech information transmitted by the audio acquisition module and converting it into speech text;
a semantic analysis module, connected to the speech recognition module and the central processing unit, for analyzing the speech text transmitted by the speech recognition module to extract its core words and keywords and sending them to the central processing unit;
an answer extraction module, connected to the central processing unit and to the local knowledge base in the memory, for searching the local knowledge base and extracting an answer according to the core words and keywords transmitted by the central processing unit, and sending the answer to the central processing unit;
a network search module, connected to the central processing unit, for searching the network for an answer, under the control command of the central processing unit, when the answer extraction module fails to extract an answer from the local knowledge base;
a speech synthesis module, connected to the central processing unit, for synthesizing the answer transmitted by the central processing unit into speech audio;
an audio playback module, connected to the speech synthesis module, the central processing unit, and the memory, for playing the speech audio synthesized by the speech synthesis module and the audio data in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201610696775.6A | 2016-08-19 | 2016-08-19 | Intelligent LED interaction method and device
Publications (1)
Publication Number | Publication Date
---|---
CN106203052A | 2016-12-07
Family
ID=57522292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201610696775.6A (CN106203052A, pending) | Intelligent LED interaction method and device | 2016-08-19 | 2016-08-19
Country Status (1)
Country | Link
---|---
CN | CN106203052A (en)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080080743A1 (en) * | 2006-09-29 | 2008-04-03 | Pittsburgh Pattern Recognition, Inc. | Video retrieval system for human face content |
CN105872759A (en) * | 2015-11-25 | 2016-08-17 | 乐视网信息技术(北京)股份有限公司 | Method and system for automatically closing video playing |
CN105701447A (en) * | 2015-12-30 | 2016-06-22 | 上海智臻智能网络科技股份有限公司 | Guest-greeting robot |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106682221B (en) * | 2017-01-04 | 2020-04-21 | 上海智臻智能网络科技股份有限公司 | Question-answer interaction response method and device and question-answer system |
CN106682221A (en) * | 2017-01-04 | 2017-05-17 | 上海智臻智能网络科技股份有限公司 | Response method and device for question and answer interaction and question and answer system |
CN107610704A (en) * | 2017-09-29 | 2018-01-19 | 珠海市领创智能物联网研究院有限公司 | A kind of speech recognition system for smart home |
US11830289B2 (en) | 2017-12-11 | 2023-11-28 | Analog Devices, Inc. | Multi-modal far field user interfaces and vision-assisted audio processing |
CN108502656A (en) * | 2018-04-11 | 2018-09-07 | 苏州福特美福电梯有限公司 | Elevator sound control method and system |
CN108985862A (en) * | 2018-08-24 | 2018-12-11 | 深圳艺达文化传媒有限公司 | The querying method and Related product of elevator card |
CN109448455A (en) * | 2018-12-20 | 2019-03-08 | 广东小天才科技有限公司 | Recitation method for real-time error correction and family education equipment |
CN110310657A (en) * | 2019-07-10 | 2019-10-08 | 北京猎户星空科技有限公司 | A kind of audio data processing method and device |
CN111767785A (en) * | 2020-05-11 | 2020-10-13 | 南京奥拓电子科技有限公司 | Man-machine interaction control method and device, intelligent robot and storage medium |
CN112115244A (en) * | 2020-08-21 | 2020-12-22 | 深圳市欢太科技有限公司 | Dialogue interaction method and device, storage medium and electronic equipment |
CN112115244B (en) * | 2020-08-21 | 2024-05-03 | 深圳市欢太科技有限公司 | Dialogue interaction method and device, storage medium and electronic equipment |
CN113395630A (en) * | 2021-07-23 | 2021-09-14 | 深圳鑫宏力精密工业有限公司 | Method and device for voice interaction with mobile terminal through Bluetooth headset |
CN113395630B (en) * | 2021-07-23 | 2023-12-29 | 深圳鑫宏力精密工业有限公司 | Method and device for performing voice interaction with mobile terminal through Bluetooth headset |
CN113965550A (en) * | 2021-10-15 | 2022-01-21 | 天津大学 | Intelligent interactive remote auxiliary video system |
CN113965550B (en) * | 2021-10-15 | 2023-08-18 | 天津大学 | Intelligent interactive remote auxiliary video system |
CN114866693A (en) * | 2022-04-15 | 2022-08-05 | 苏州清睿智能科技股份有限公司 | Information interaction method and device based on intelligent terminal |
CN114866693B (en) * | 2022-04-15 | 2024-01-05 | 苏州清睿智能科技股份有限公司 | Information interaction method and device based on intelligent terminal |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | C06 | Publication |
 | PB01 | Publication |
 | C10 | Entry into substantive examination |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20161207