CN108366899A - Image processing method, system, and intelligent blind-guiding device - Google Patents

Image processing method, system, and intelligent blind-guiding device Download PDF

Info

Publication number
CN108366899A
CN108366899A (application number CN201780003265.9A)
Authority
CN
China
Prior art keywords
module
image
picture
terminal
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201780003265.9A
Other languages
Chinese (zh)
Inventor
王宁
刘兆祥
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shenzhen Robotics Systems Co Ltd
Cloudminds Inc
Original Assignee
Cloudminds Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Inc filed Critical Cloudminds Inc
Publication of CN108366899A

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H - PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 - Appliances for aiding patients or disabled persons to walk about
    • A61H3/06 - Walking aids for blind persons
    • A61H3/061 - Walking aids for blind persons with electronic detecting or guiding means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An image processing method comprises the following steps: at a terminal, when the terminal is in a motion state, acquiring and uploading video images of the terminal's surroundings; when the terminal is in a stationary state, acquiring and uploading pictures of the terminal's surroundings.

Description

Image processing method, system, and intelligent blind-guiding device
【Technical field】
The present application relates to the field of intelligent blind guidance, and in particular to an image processing method, an image processing system, and an intelligent blind-guiding device.
【Background technology】
With the development of network technology, advances in image processing and big-data processing capability, and the popularization of mobile terminals, research on assistive devices and guide software for blind users has attracted increasing attention.
Blind people are a disadvantaged group who need support from society to improve their ability to live independently and to enjoy a better quality of life. Helping blind users travel freely and recognize objects has therefore become a major trend in research on assistive devices and guide software.
At present, most common guide-assistance software requires the blind user to take a photograph manually, upload it to a server for detection, and then receive the recognition result for navigation and environment perception. This mode requires the user to operate the camera personally; because a blind user cannot see the surroundings, it is difficult to capture an image of the environment promptly and accurately, and manual image acquisition is cumbersome and highly inconvenient for blind users. For undemanding tasks such as general object detection this is largely sufficient, but in more demanding scenarios such as path guidance, manual picture-based detection cannot meet the real-time requirements of guidance.
Chinese patent application No. 201310711816.0 discloses an intelligent guide cane for blind users based on image recognition, belonging to the field of electronic information engineering. The intelligent guide cane consists of four parts: a hand-shaped handle, an intelligent cane body, a roller cane tip, and a Bluetooth headset; a human-machine interaction subsystem, a road-condition recognition subsystem, a navigation subsystem, and a roller braking subsystem are embedded in the cane. The intelligent cane body is located in the middle section of the cane and is hollow inside; the road-condition recognition subsystem, which contains a microprocessor, is embedded in it, and a camera and an LED lighting device are connected to the microprocessor. The road-condition recognition subsystem is connected to the navigation subsystem of the roller cane tip. Using image recognition, the cane effectively identifies tactile paving (blind paths) and bus information to help blind users travel safely and independently.
However, the existing intelligent guide cane only recognizes tactile paving and bus information through image recognition to assist safe and independent travel. It cannot assist navigation or help blind users understand the surrounding environment more comprehensively, and its obstacle-avoidance performance is limited. Moreover, if a guide device relies on image recognition to help the user understand the environment, the amount of data to be processed is very large, causing delayed responses and posing safety risks in certain environments.
Therefore, the intelligent obstacle-avoidance technology of the prior art needs to be improved.
【Invention content】
Embodiments of the present application provide an image processing method, an image processing system, and an intelligent blind-guiding device, so as to reduce the consumption of network resources while achieving accurate obstacle avoidance.
In a first aspect, an embodiment of the present application provides an image processing method comprising the following steps: at a terminal, when the terminal is in a motion state, acquiring and uploading video images of the terminal's surroundings; when the terminal is in a stationary state, acquiring and uploading pictures of the terminal's surroundings.
In a second aspect, an embodiment of the present application provides an image processing system comprising a terminal and a cloud server in wireless communication with the terminal, the terminal comprising a video acquisition module, a picture acquisition module, and a sending module.
When the terminal is in a motion state, the video acquisition module acquires video images of the terminal's surroundings and uploads them to the cloud server through the sending module; when the terminal is in a stationary state, the picture acquisition module acquires pictures of the terminal's surroundings and uploads them to the cloud server through the sending module.
In a third aspect, described from the perspective of an intelligent blind-guiding device, an embodiment of the present application provides an intelligent blind-guiding device comprising a video acquisition module, a picture acquisition module, and a sending module.
When the intelligent blind-guiding device is in a motion state, the video acquisition module acquires video images of the device's surroundings and uploads them through the sending module; when the intelligent blind-guiding device is in a stationary state, the picture acquisition module acquires pictures of the surroundings and uploads them through the sending module.
In a fourth aspect, described from the perspective of a cloud server, an embodiment of the present application provides an image processing method comprising the following steps:
receiving video images, the video images being images of the user's surroundings acquired when the terminal is in a motion state;
receiving pictures, the pictures being pictures of the user's surroundings acquired when the terminal is in a stationary state;
generating real-time obstacle-avoidance information, the obstacle-avoidance information being generated by recognizing the user's navigation state and environment information from the video images and/or pictures and based on that navigation state and environment information; and
sending the obstacle-avoidance information.
In a fifth aspect, an embodiment of the present application further provides an electronic device comprising:
at least one processor; and
a memory and a communication component communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and when the instructions are executed by the at least one processor, a data channel is established through the communication component, enabling the at least one processor to perform the method described above.
In a sixth aspect, an embodiment of the present application further provides a computer program product, the computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method described above.
The beneficial effect of the present application is that the image processing method, system, and intelligent blind-guiding device provided by the embodiments automatically acquire the appropriate type of image according to the user's state and environment: video images of the user's surroundings are acquired when the user is in motion, and pictures of the surroundings are acquired when the user is stationary, thereby reducing the consumption of network resources. Real-time obstacle-avoidance information, including the identified unobstructed route and obstacle-avoidance travel instructions, is then generated from the acquired images, providing blind users with more timely, more convenient, and more accurate obstacle-avoidance information and realizing safe guided navigation.
【Description of the drawings】
One or more embodiments are illustrated by the figures in the corresponding drawings; these exemplary illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
Fig. 1 is a system architecture diagram of an image processing system provided by an embodiment of the present application;
Fig. 2 is another system architecture diagram of the image processing system provided by an embodiment of the present application;
Fig. 3 is a general flowchart of the image processing system provided by an embodiment of the present application;
Fig. 4 is a flowchart of the image processing system provided by an embodiment of the present application;
Fig. 5 is a module diagram of one embodiment of the intelligent blind-guiding device provided by an embodiment of the present application;
Fig. 6 is a module diagram of another embodiment of the intelligent blind-guiding device provided by an embodiment of the present application;
Fig. 7 is a processing flowchart of the intelligent blind-guiding device cooperating with the cloud server, provided by an embodiment of the present application;
Fig. 8 is a flowchart of the cloud server provided by an embodiment of the present application;
Fig. 9 is a module diagram of the cloud server provided by an embodiment of the present application;
Fig. 10 is a hardware architecture diagram for implementing the image processing method, provided by an embodiment of the present application.
【Specific implementation mode】
To make the objectives, technical solutions, and advantages of the present application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the application, not to limit it.
The image processing method and system, intelligent blind-guiding device, and cloud server provided by the embodiments of the present application acquire the user's state and environment in real time and select the type of image to process according to that state and environment. While reducing the consumption of network resources, they can recognize the navigation state and environment information from the images and provide real-time obstacle-avoidance information for guided-navigation interaction, offering more timely and accurate obstacle-avoidance suggestions and navigation feedback without manual operation.
As shown in Fig. 1, the intelligent blind-guiding device of this embodiment can be applied to a guide helmet 10 serving as an intelligent guide terminal, to guide glasses 20, to a guide cane, or to other wearable guide devices.
In terms of hardware, the intelligent blind-guiding device is provided with one or more CPUs; where necessary to meet image recognition requirements, a GPU is added to perform functions such as data analysis and image recognition.
In terms of software, the intelligent blind-guiding device may run a mobile operating system such as Android, iOS, or Windows Phone.
Cloud servers 51-53 receive the image information sent by the intelligent blind-guiding device and process it in a big-data manner.
In one embodiment, the intelligent blind-guiding device collects the user's travel state and environment information through sensors such as a GPS or BeiDou positioning module, a gyroscope, and a camera, and monitors the state the user is in. When the user is moving, video capture of the user's surroundings is started; to reduce the transmission load, the captured video is downgraded, then encoded and compressed before being transmitted to the cloud server 50. The cloud server 50 invokes obstacle-detection algorithms to perform image recognition and calculate an unobstructed route, identifies the user's navigation state and environment information, provides obstacle-avoidance information based on that navigation state and environment information, and returns it to the user's intelligent blind-guiding device in real time. When the user is stationary, pictures are captured instead; to reduce the transmission load, the captured pictures are likewise downgraded, encoded, and compressed before being transmitted to the cloud server 50, which invokes object-recognition algorithms to perform object and environment recognition, provides obstacle-avoidance information based on the recognized objects and environment, and returns the obstacle-avoidance information to the user's intelligent blind-guiding device in real time. The intelligent guide system of this embodiment can be applied to products such as a guide helmet, a mobile-terminal guide application, or other wearable guide devices.
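As an illustration of the terminal-side flow just described, the minimal sketch below shows one acquisition step: choose video or picture according to the user's state, downgrade, compress, and upload. All function names (capture_video_frame, capture_picture, degrade, encode_and_compress, upload_to_cloud) are hypothetical placeholders for the device's camera and network components, not names taken from the application.

```python
def acquisition_step(is_moving, capture_video_frame, capture_picture,
                     degrade, encode_and_compress, upload_to_cloud):
    """One pass of the terminal-side pipeline: pick the image type from the
    user's state, downgrade it, compress it, and upload it to the cloud."""
    if is_moving:
        image = capture_video_frame()          # pathfinding / guidance mode
        payload = encode_and_compress(degrade(image, quality="first"))
    else:
        image = capture_picture()              # environment-perception mode
        payload = encode_and_compress(degrade(image, quality="second"))
    upload_to_cloud(payload)
```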
Embodiment 1
Referring to Fig. 1, this embodiment provides an image processing system.
In general, the image processing system includes a plurality of terminals and a cloud server 50 in wireless communication with the terminals. Each terminal includes a video acquisition module 32b, a picture acquisition module 32a, and a sending module 35. When the terminal is in a motion state, the video acquisition module 32b acquires video images of the terminal's surroundings and uploads them to the cloud server 50 through the sending module 35; when the terminal is in a stationary state, the picture acquisition module 32a acquires pictures of the terminal's surroundings and uploads them to the cloud server 50 through the sending module 35.
In a specific implementation, the terminals of the image processing system are several intelligent blind-guiding devices 10, 20, each of which communicates wirelessly with the cloud server 50. The cloud server 50 is a big-data processing center and may be a server cluster formed by networking several servers 51-53. The intelligent blind-guiding device may be a guide helmet 10, a mobile-terminal guide application, wearable guide glasses, or a similar product. This embodiment is described using the guide helmet 10 as an example.
Referring to Fig. 5, an embodiment of the blind-guiding device of the image processing system is shown. The blind-guiding device includes a detection module 31, an image acquisition module 32, an adjustment module 33, a sending module 35, a receiving module 36, a voice interaction module 37, and a generation module 39. The image acquisition module 32 includes the video acquisition module 32b and the picture acquisition module 32a.
The detection module 31 monitors the user's travel state and receives user state parameters. The image acquisition module 32 judges the user's travel state based on these state parameters: when the user is in a motion state, the video acquisition module 32b acquires video images of the user's surroundings; when the user is in a stationary state, the picture acquisition module 32a acquires pictures of the user's surroundings. The adjustment module 33 adjusts the file size of the video images and/or pictures, which, while preserving image recognition quality, reduces the amount of data to be processed on the one hand and the amount of data transmitted on the other, thereby reducing the consumption of network resources.
The sending module 35 is used to transmit the data to the cloud server.
The adjustment module 33 includes a compression module 34. The compression module 34 adjusts the captured video images to a first image quality standard, the first image quality standard being lower than the quality standard of the pictures.
The video images at the first image quality standard are used for obstacle recognition, and the detection module 31 determines the navigation state according to the recognized obstacles.
The adjustment module 33 adjusts the captured pictures to a second image quality standard. Images at the second image quality standard can be used for the recognition of environment information.
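For illustration only, the two quality standards could be realized as different target resolutions and JPEG quality factors, as in the sketch below using OpenCV. The concrete numbers are assumptions; the application only requires that the first standard (video, for obstacle recognition) be lower than the second standard (pictures, for environment recognition).

```python
import cv2

# Assumed example values, not values taken from the application.
FIRST_STANDARD = {"max_width": 320, "jpeg_quality": 50}    # video frames
SECOND_STANDARD = {"max_width": 1280, "jpeg_quality": 85}  # still pictures

def adjust_to_standard(image, standard):
    """Downscale and re-encode an image to the given quality standard."""
    h, w = image.shape[:2]
    if w > standard["max_width"]:
        scale = standard["max_width"] / w
        image = cv2.resize(image, (standard["max_width"], int(h * scale)))
    ok, encoded = cv2.imencode(
        ".jpg", image, [cv2.IMWRITE_JPEG_QUALITY, standard["jpeg_quality"]])
    return encoded.tobytes() if ok else None
```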
The generation module 39 is used to recognize the user's navigation state and environment information from the adjusted video images and/or pictures and to provide obstacle-avoidance information based on that navigation state and environment information.
The generation module 39 is further used to determine the confidence of the navigation state and environment information.
The obstacle-avoidance information includes obstacle-avoidance travel instructions such as stop, go forward, turn left, and turn right. The voice interaction module 37 plays the obstacle-avoidance information to the user by voice and recognizes the user's voice feedback, thereby completing voice-interactive control.
If the confidence of the navigation state and environment information is below a threshold, the video images and/or pictures are sent to a manual service endpoint 60 through the sending module 35, and the obstacle-avoidance information fed back by the manual service endpoint 60 is obtained through the receiving module 36.
Referring to Fig. 6, another embodiment of the intelligent guide system is shown. In this embodiment, in order to reduce local data processing and to facilitate centralized upgrading and unified management, the generation module 42 is arranged on the cloud server 50. In this intelligent guide system, the generation module 42 is also used to determine the confidence of the navigation state and environment information.
Likewise, the obstacle-avoidance information includes obstacle-avoidance travel instructions such as stop, go forward, turn left, and turn right. The cloud server 50 sends the generated obstacle-avoidance travel instructions to the intelligent blind-guiding device 10 that originated the data; the voice interaction module 37 of the device 10 plays the obstacle-avoidance information to the user by voice and recognizes the user's voice feedback, completing voice-interactive control. In this embodiment, when the confidence of the navigation state and environment information is below the threshold, the system connects to the manual service endpoint 60, obtains the manually produced obstacle-avoidance information from it, and forwards it to the corresponding intelligent blind-guiding device 10.
Referring to Fig. 3, a flowchart of the image processing system is shown. The image processing method includes the following steps:
Step 101: at the terminal, when the terminal is in a motion state, acquiring and uploading video images of the terminal's surroundings;
Step 102: when the terminal is in a stationary state, acquiring and uploading pictures of the terminal's surroundings.
In a specific implementation, the state of the user wearing the terminal, which may be an intelligent blind-guiding device, needs to be determined. The method therefore further includes the following steps:
monitoring the user's travel state and receiving user state parameters;
judging the user's travel state according to the state parameters; when the user is in a motion state, acquiring video images of the user's surroundings, and when the user is in a stationary state, acquiring pictures of the user's surroundings;
adjusting the video images and/or pictures;
recognizing the user's navigation state and environment information from the adjusted video images and/or pictures, and providing obstacle-avoidance information based on that navigation state and environment information.
Referring to Fig. 4, in order to ensure that the obstacle-avoidance information is accurate, a confidence judgment is made for each recognized navigation state and environment information. The image processing method further includes:
Step 202: determining the confidence of the navigation state and environment information;
Step 203: judging whether the confidence is below a threshold;
Step 204: when the confidence of the navigation state and environment information is above the threshold, continuing to provide obstacle-avoidance information based on the navigation state and environment information;
Step 205: when the confidence of the navigation state and environment information is below the threshold, connecting to the manual service endpoint and obtaining the obstacle-avoidance information sent by the manual service endpoint.
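A minimal sketch of the confidence check of steps 202-205 follows. The threshold value, the fields of the recognition result, and the ask_manual_service and build_avoidance_info helpers are assumptions for illustration, not names defined in the application.

```python
CONFIDENCE_THRESHOLD = 0.8   # assumed value; the application only requires "a threshold"

def build_avoidance_info(navigation_state, environment):
    # Hypothetical: turn the recognized state into a travel instruction
    # such as "stop", "go forward", "turn left", or "turn right".
    return {"state": navigation_state, "environment": environment}

def decide_avoidance(recognition, ask_manual_service, images):
    """Steps 202-205: use the recognized result if confident, otherwise fall
    back to the manual service endpoint."""
    if recognition.confidence >= CONFIDENCE_THRESHOLD:
        # Step 204: generate avoidance info from navigation state + environment.
        return build_avoidance_info(recognition.navigation_state,
                                    recognition.environment)
    # Step 205: forward the images to the manual service endpoint (60) and
    # return the manually produced avoidance information.
    return ask_manual_service(images)
```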
Embodiment 2
In this embodiment, the image processing method and system are applied to an intelligent blind-guiding device, and the technical solution of the present application is described below from the side of the intelligent blind-guiding device 10.
From the perspective of reducing the consumption of network resources, the intelligent blind-guiding device includes a video acquisition module 32b, a picture acquisition module 32a, and a sending module 35.
When the intelligent blind-guiding device is in a motion state, the video acquisition module 32b acquires video images of the device's surroundings and uploads them through the sending module 35; when the device is in a stationary state, the picture acquisition module 32a acquires pictures of the surroundings and uploads them through the sending module 35.
The intelligent blind-guiding device further includes an adjustment module 33 and a detection module 31. The adjustment module 33 includes a compression module 34. The compression module 34 adjusts the captured video images to a first image quality standard, the first image quality standard being lower than the quality standard of the pictures. The video images at the first image quality standard are used for obstacle recognition, and the detection module 31 determines the navigation state according to the recognized obstacles.
The adjustment module 33 adjusts the captured pictures to a second image quality standard. Images at the second image quality standard can be used for the recognition of environment information.
The adjustment module 33 can adjust images in several ways besides compression, which reduces the amount of image data to be processed. It may also include a single-channel grayscale conversion module, an image binarization module, an edge-extraction module, or a combination of these modules, to complete the adjustment and processing of the images.
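A hedged sketch of such a preprocessing chain using OpenCV is given below; which steps are enabled and the threshold values are illustrative assumptions rather than choices fixed by the application.

```python
import cv2

def preprocess(image, to_gray=True, binarize=False, edges=False):
    """Optional single-channel grayscale conversion, binarization, and
    edge extraction, applied before compression to shrink the data."""
    out = image
    if to_gray or binarize or edges:
        out = cv2.cvtColor(out, cv2.COLOR_BGR2GRAY)   # single-channel grayscale
    if binarize:
        # Otsu's method picks the threshold automatically (assumed choice).
        _, out = cv2.threshold(out, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    if edges:
        out = cv2.Canny(out, 50, 150)                 # assumed Canny thresholds
    return out
```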
Referring to Fig. 5, an embodiment of the intelligent blind-guiding device is shown; in this embodiment, the generation module is arranged on the intelligent blind-guiding device itself.
The intelligent blind-guiding device includes a detection module 31, an image acquisition module 32, an adjustment module 33, a sending module 35, a receiving module 36, a voice interaction module 37, and a generation module 39. The image acquisition module 32 includes the video acquisition module 32b and the picture acquisition module 32a.
The detection module 31 monitors the user's travel state and receives user state parameters.
The image acquisition module 32 judges the user's travel state according to the state parameters: when the user is in a motion state, the video acquisition module 32b acquires video images of the user's surroundings; when the user is in a stationary state, the picture acquisition module 32a acquires pictures of the user's surroundings.
The generation module 39 is used to recognize the user's navigation state and environment information from the adjusted video images and/or pictures and to provide obstacle-avoidance information based on that navigation state and environment information.
The generation module 39 is further used to determine the confidence of the navigation state and environment information.
The obstacle-avoidance information includes obstacle-avoidance travel instructions such as stop, go forward, turn left, and turn right. The voice interaction module 37 plays the obstacle-avoidance information to the user by voice and recognizes the user's voice feedback, completing voice-interactive control.
The receiving module 36 receives data from the cloud server.
The detection module 31 is used to monitor the user's travel state and receive user state parameters. There are several ways to implement the detection module 31, which are described in turn below.
In a first embodiment of the detection module 31, the detection module 31 remains in a monitoring state after the intelligent blind-guiding device starts, so as to judge the user's state and switch the image acquisition mode in time. The detection module 31 monitors sensor signals such as an inertial sensor (for example, a gyroscope) and a GPS module. When the user is in a motion state and remains so for several seconds (5 seconds in this embodiment), camera video recording is started to acquire video images; if the sensor signals indicate that the user has switched from the motion mode to a stationary mode and remains so for several seconds (5 seconds in this embodiment), the camera is triggered to take photographs and acquire pictures.
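A minimal sketch of this sensor-based detection variant, assuming hypothetical read_speed_mps and read_angular_rate callables standing in for the GPS module and gyroscope; the numeric thresholds and polling interval are assumptions, and only the 5-second persistence comes from this embodiment.

```python
import time

def detect_state(read_speed_mps, read_angular_rate,
                 speed_thresh=0.4, rate_thresh=0.2, hold_seconds=5.0):
    """Yield "moving"/"stationary" each time a state change has persisted
    for hold_seconds (5 s in this embodiment)."""
    state = "stationary"
    observed, observed_since = state, time.time()
    while True:
        moving = (read_speed_mps() > speed_thresh or
                  abs(read_angular_rate()) > rate_thresh)
        new_observation = "moving" if moving else "stationary"
        if new_observation != observed:
            observed, observed_since = new_observation, time.time()
        if observed != state and time.time() - observed_since >= hold_seconds:
            state = observed
            yield state            # confirmed switch: caller swaps camera mode
        time.sleep(0.1)            # sensor polling interval (assumed)
```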
In a second embodiment of the detection module 31, the user's state is monitored by an image sensor; for example, environment images are captured by the image sensor and the differences between consecutive frames are analyzed algorithmically to estimate the user's motion state.
As one implementation of such motion image analysis, the detection module 31 includes an optical-flow module for monitoring the user's travel state. By computing optical flow, i.e., the apparent velocity of patterns in the time-varying image, the user's movement speed is estimated. When the movement speed exceeds a set threshold, for example 1.5 km/h, and persists for several seconds (5 seconds in this embodiment), the user is considered to be in motion; the guide application enters the path-finding guidance mode and the system starts video capture to acquire video images of the user's surroundings. If the movement speed is below the set threshold and persists for several seconds, the user is considered stationary; the intelligent blind-guiding device then needs to enter the environment-perception mode to recognize objects, so the camera is triggered to take photographs and acquire pictures.
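A sketch of an optical-flow speed cue using OpenCV's dense Farnebäck flow. Converting flow magnitude (pixels per frame) into a walking speed such as 1.5 km/h requires camera calibration and the frame rate, so the threshold below is expressed directly in pixels per frame and is an assumption.

```python
import cv2
import numpy as np

def mean_flow_magnitude(prev_gray, curr_gray):
    """Average optical-flow magnitude (pixels per frame) between two frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(np.mean(magnitude))

def looks_like_motion(prev_gray, curr_gray, pixel_thresh=2.0):
    """True if the average flow suggests the wearer is walking.

    pixel_thresh is an assumed stand-in for the 1.5 km/h threshold; the real
    mapping depends on frame rate and camera geometry.
    """
    return mean_flow_magnitude(prev_gray, curr_gray) > pixel_thresh
```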
As another implementation of motion image analysis, the detection module 31 includes a visual navigation module for monitoring the user's travel state. Detection is realized by visual simultaneous localization and mapping (Visual Simultaneous Localization and Mapping, VSLAM), referred to in this embodiment as the visual navigation module. By computing the user's inter-frame displacement and attitude change, the user's movement speed and angular velocity are obtained. If the movement speed and angular velocity exceed the set thresholds and persist for several seconds (for example, 5 seconds), the user is considered to be in motion; the guide application enters the path-finding guidance mode and the system starts video capture to acquire video images of the user's surroundings. If the movement speed and angular velocity are below the set thresholds and persist for several seconds (for example, 5 seconds), the user is considered stationary; the intelligent blind-guiding device then needs to enter the environment-perception mode to recognize objects, so the camera is triggered to take photographs and acquire pictures.
In a third embodiment of the detection module 31, the detection module 31 monitors the user's travel state through a combination of sensors.
The sensor combination includes one or more of an inertial sensor, an electronic compass sensor, and a GPS module.
The following examples illustrate estimating the user's motion state with an inertial sensor, an electronic compass sensor, a GPS module, or combinations thereof. For example, the inertial sensor measures the user's linear acceleration and angular velocity in real time, from which the user's movement speed and angular velocity are estimated. At the same time, the electronic compass sensor measures the user's absolute orientation, and, combined with time, the user's rotation speed can be estimated indirectly. If the user's movement speed and angular velocity exceed the set thresholds and persist for several seconds (for example, 5 seconds), the user is considered to be in motion; if they are below the set thresholds and persist for several seconds, the user is considered stationary.
As another example, the GPS module measures the user's absolute position, and, combined with time, the user's movement speed and turning speed can be estimated indirectly. If the user's movement speed and angular velocity exceed the set thresholds and persist for several seconds (for example, 5 seconds), the user is considered to be in motion; if they are below the set thresholds and persist for several seconds, the user is considered stationary.
As a further example, the above three kinds of sensors can be combined to obtain a more accurate movement speed and angular velocity of the user. If the user's movement speed and angular velocity exceed the set thresholds and persist for several seconds (for example, 5 seconds), the user is considered to be in motion; if they are below the set thresholds and persist for several seconds, the user is considered stationary.
In a fourth embodiment of the detection module 31, visual simultaneous localization and mapping is combined with the sensor combination in order to detect the user's state more accurately. The detection module 31 includes the visual navigation module and the sensor combination for accurately monitoring the user's travel state. To obtain a more accurate movement speed and angular velocity, the image-based estimates can be fused with the estimates from the inertial sensor, electronic compass sensor, and GPS module through a Kalman filter, with the final output being a more accurate user state.
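A simple one-dimensional illustration of the fusion step: a scalar Kalman update blending two speed estimates according to their variances. The noise values are assumptions; a real implementation would track a state vector (speed and angular velocity) fed by the VSLAM, inertial, compass, and GPS estimates.

```python
def kalman_fuse(estimate, variance, measurement, meas_variance):
    """One scalar Kalman update: blend the current estimate with a new
    measurement according to their variances."""
    gain = variance / (variance + meas_variance)
    fused = estimate + gain * (measurement - estimate)
    fused_variance = (1.0 - gain) * variance
    return fused, fused_variance

# Example: fuse a vision-based speed estimate with a GPS-based one.
speed, var = 1.2, 0.30          # from optical flow / VSLAM (assumed values)
speed, var = kalman_fuse(speed, var, measurement=1.5, meas_variance=0.10)  # GPS
print(speed, var)               # fused speed is pulled toward the lower-noise GPS value
```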
Referring to Fig. 6, in another embodiment of the intelligent blind-guiding device 10, in order to reduce local data processing on the device and to facilitate centralized upgrading and unified management, the generation module is arranged on the cloud server 50.
In this embodiment, the intelligent blind-guiding device includes a detection module 31, an image acquisition module 32, an adjustment module 33, a sending module 35, a receiving module 36, and a voice interaction module 37. The image acquisition module 32 includes the video acquisition module 32b and the picture acquisition module 32a.
In this embodiment, the obstacle-avoidance information is generated by the cloud server 50: the user's navigation state and environment information are recognized from the video images and/or pictures, and the obstacle-avoidance information is generated based on that navigation state and environment information.
The adjustment module 33 adjusts the file size of the video images and/or pictures.
The adjustment module 33 further includes a compression module 34. The compression module 34 compresses the adjusted video images and/or pictures before they are transmitted to the cloud server 50. When adjusting the file size of the video images and/or pictures, the adjustment module 33 may use a single-channel grayscale conversion module, an image binarization module, an edge-extraction module, or a combination of these modules.
Referring to Fig. 7, this embodiment also provides an implementation of the method on the intelligent blind-guiding device side, which includes the following steps:
Step 301: monitoring the user's travel state and receiving user state parameters;
Step 302: judging the user's travel state according to the state parameters; when the user is in a motion state, acquiring video images of the user's surroundings, and when the user is in a stationary state, acquiring pictures of the user's surroundings;
Step 303: adjusting and sending the video images and/or pictures;
Step 304: receiving obstacle-avoidance information, wherein the obstacle-avoidance information is generated by recognizing the user's navigation state and environment information from the video images and/or pictures and is based on that navigation state and environment information.
The received obstacle-avoidance information may be computed by the local generation module 39 or received from the cloud server 50.
The method for adjusting the file size of the video images and/or pictures includes single-channel grayscale conversion, image binarization, edge extraction, or a combination of the above.
After receiving the obstacle-avoidance information, the intelligent blind-guiding device completes interactive control with the user by voice through the voice interaction module 37.
Embodiment 3
Referring to Fig. 8, this embodiment describes the technical solution of the present application from the side of the cloud server 50.
In the following embodiment, the cloud server is illustrated as the component that generates the obstacle-avoidance information. The cloud server 50 includes a receiving module 41, a generation module 42, a sending module 44, and a voice interaction module 45. The generation module 42 includes an image recognition module 43.
In the embodiment in which the obstacle-avoidance information is generated by the intelligent blind-guiding device 10, the cloud server does not include the generation module 42 and the image recognition module 43.
The receiving module 41 receives the video images and/or pictures from the intelligent blind-guiding device 10. The video images are acquired by the blind-guiding device when, after monitoring the user's travel state, receiving user state parameters, and judging the travel state from those parameters, the user is found to be in a motion state; the pictures are acquired when the user is found to be in a stationary state.
In order to reduce the data transmission between the intelligent blind-guiding device and the cloud server and to improve system efficiency and response speed, the file size of the video images and/or pictures is first adjusted to an appropriate size. In a specific implementation, a single-channel grayscale conversion module, an image binarization module, an edge-extraction module, or a combination of these modules may be used when adjusting the image size.
The adjusted video images and/or pictures are further compressed to reduce the amount of data to be transmitted, and are then sent to the cloud server 50.
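A sketch of the compress-and-upload step; the HTTP endpoint, header names, and response format are hypothetical assumptions, since the application only specifies that the adjusted images are compressed and then transmitted to the cloud server.

```python
import zlib
import requests

CLOUD_URL = "https://cloud.example.com/guide/upload"   # hypothetical endpoint

def send_to_cloud(image_bytes, is_video_frame, device_id):
    """Compress the already-adjusted image bytes and upload them."""
    payload = zlib.compress(image_bytes, level=6)
    response = requests.post(
        CLOUD_URL,
        data=payload,
        headers={
            "Content-Encoding": "deflate",
            "X-Device-Id": device_id,
            "X-Image-Kind": "video" if is_video_frame else "picture",
        },
        timeout=5,
    )
    response.raise_for_status()
    return response.json()      # e.g. the avoidance information returned
```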
After determining the obstacle-avoidance information, the cloud server 50 sends it back to the originating intelligent blind-guiding device through the sending module 44.
In this embodiment, the cloud server 50 is also provided with a voice interaction module 45. When the cloud server 50 serves as the manual service endpoint 60, the voice interaction module 45 plays the voice messages sent by the user and collects the voice replies of the manual service endpoint 60, which are then sent to the originating intelligent blind-guiding device 10 through the sending module 44 for playback, thereby realizing real-time voice communication.
Referring to Fig. 8, a method flowchart for realizing intelligent blind guidance on the cloud-server side is shown. The intelligent blind-guidance method includes the following steps:
Step 401: receiving video images, the video images being images of the user's surroundings acquired when the terminal is in a motion state; the blind-guiding device obtains them by monitoring the user's travel state, receiving user state parameters, judging the travel state from those parameters, and capturing video when the user is in a motion state;
Step 402: receiving pictures, the pictures being pictures of the user's surroundings acquired when the terminal is in a stationary state;
Step 403: generating real-time obstacle-avoidance information, the obstacle-avoidance information being generated by recognizing the user's navigation state and environment information from the video images and/or pictures and based on that navigation state and environment information; and
Step 404: sending the obstacle-avoidance information.
The process of recognizing the user's navigation state and environment information further includes determining the confidence of the navigation state and environment information.
When the confidence of the navigation state and environment information is below the threshold, the cloud server connects to the manual service endpoint 60 and obtains the obstacle-avoidance information sent by the manual service endpoint 60.
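A compact sketch of the cloud-side flow of steps 401-404, including the low-confidence fallback just described; the recognizer interface, the manual-service client, and the threshold value are assumptions standing in for the server's actual components.

```python
CONFIDENCE_THRESHOLD = 0.8      # assumed value

def handle_upload(image, is_video_frame, recognizer, manual_service, send):
    """Steps 401-404 on the cloud server: recognize, generate, and send back
    obstacle-avoidance information."""
    if is_video_frame:
        result = recognizer.detect_obstacles(image)       # navigation state
    else:
        result = recognizer.recognize_environment(image)  # environment info

    if result.confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: ask the manual service endpoint (60) instead.
        avoidance = manual_service.request_review(image)
    else:
        avoidance = {
            "instruction": result.instruction,   # e.g. "stop", "turn left"
            "environment": result.environment,
        }
    send(avoidance)                              # step 404: return to the device
```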
The image processing method and system and the intelligent blind-guiding device provided by the embodiments of the present application automatically select the appropriate image acquisition means according to the user's state and environment: video images of the user's surroundings are acquired when the user is in motion, and pictures of the surroundings are acquired when the user is stationary, thereby reducing the consumption of network resources. In addition, real-time obstacle-avoidance information, including the identified unobstructed route and obstacle-avoidance travel instructions, is generated from the acquired images, providing blind users with more timely, more convenient, and more accurate obstacle-avoidance information and realizing safe guided navigation. Furthermore, the detection module of the present application can quickly and accurately detect the user's current travel state, providing an accurate navigation state and environment information for real-time guidance interaction. The technical solution also resizes images reasonably before recognition, reducing the data-processing burden of the system, and compresses images before transmission, reducing the data-transmission load. At the same time, the technical solution assigns a confidence to the navigation state and environment information, improving the accuracy of guidance and obstacle-avoidance decisions.
Embodiment 4
Fig. 10 is a hardware architecture diagram of an electronic device 600 for performing the image processing method provided by the embodiments of the present application. As shown in Fig. 10, the electronic device 600 includes:
one or more processors 610, a memory 620, one or more graphics processors (GPUs) 630, and a communication component 650; one processor 610 and one graphics processor 630 are taken as an example in Fig. 10. The memory 620 stores instructions executable by the at least one processor 610; when the instructions are executed by the at least one processor, a data channel is established through the communication component 650, enabling the at least one processor to perform the image processing method.
The processor 610, the memory 620, the graphics processor 630, and the communication component 650 may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 10.
The memory 620, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the image processing method in the embodiments of the present application (for example, the detection module 31, image recognition module 32, adjustment module 33, and generation module 39 shown in Fig. 5, and the generation module 45, image recognition module 27, and voice interaction module 45 shown in Fig. 6). By running the non-volatile software programs, instructions, and modules stored in the memory 620, the processor 610 executes the various functional applications and data processing of the server, that is, implements the image processing method of the above method embodiments.
The memory 620 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application required for a function, and the data storage area may store data created according to the use of the terminal, for example the intelligent blind-guiding device. In addition, the memory 620 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 620 may optionally include memory located remotely from the processor 610; these remote memories may be connected to the interactive electronic device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 620 and, when executed by the one or more processors 610, perform the image processing method of any of the above method embodiments, for example, performing method steps 101 to 102 in Fig. 3 described above, method steps 202 to 205 in Fig. 4 described above, and method steps 301 to 304 in Fig. 7 described above, and realizing the functions of the detection module 31, image recognition module 32, adjustment module 33, and generation module 39 shown in Fig. 5, and the generation module 45, image recognition module 27, and voice interaction module 45 shown in Fig. 6.
The above product can perform the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
An embodiment of the present application provides a non-volatile computer-readable storage medium storing computer-executable instructions which are executed by one or more processors, for example, to perform method steps 101 to 102 in Fig. 3 described above, method steps 202 to 205 in Fig. 4 described above, and method steps 301 to 304 in Fig. 7 described above, and to realize the functions of the detection module 31, image recognition module 32, adjustment module 33, and generation module 39 shown in Fig. 5, and the generation module 45, image recognition module 27, and voice interaction module 45 shown in Fig. 6.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the above description of the embodiments, those of ordinary skill in the art can clearly understand that each embodiment can be implemented by software plus a general hardware platform, or of course by hardware. Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Within the spirit of the present application, the technical features in the above embodiments or in different embodiments can be combined, and the steps can be implemented in any order; there are many other variations of the different aspects of the present application as described above which, for brevity, are not provided in detail. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or equivalently replace some of the technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (21)

1. An image processing method, characterized by comprising the following steps: at a terminal,
when the terminal is in a motion state, acquiring and uploading video images of the terminal's surroundings; when the terminal is in a stationary state, acquiring and uploading pictures of the terminal's surroundings.
2. The method according to claim 1, characterized in that acquiring and uploading video images of the terminal's surroundings comprises:
adjusting the captured video images to a first image quality standard, the first image quality standard being lower than the quality standard of the pictures.
3. The method according to claim 2, characterized in that the video images at the first image quality standard can be used for obstacle recognition, and the navigation state is determined according to the recognized obstacles.
4. The method according to claim 1, characterized in that acquiring and uploading pictures of the terminal's surroundings comprises:
adjusting the captured pictures to a second image quality standard; the images at the second image quality standard can be used for the recognition of environment information.
5. The method according to claim 2, characterized in that adjusting the captured video images to the first image quality standard comprises:
adjusting the captured video images to the first image quality standard when it is determined that the current environment meets a set environmental condition, wherein the set environmental condition comprises being in a non-dangerous environment, and the current environment is determined to be a dangerous environment when it is a crossing.
6. The method according to claim 3 or 4, characterized by further comprising determining the confidence of the navigation state and environment information.
7. The method according to claim 6, characterized in that when the confidence of the navigation state and environment information is below a threshold, a manual service endpoint is connected and the obstacle-avoidance information sent by the manual service endpoint is obtained.
8. An image processing system, characterized by comprising a terminal and a cloud server in wireless communication with the terminal, the terminal comprising a video acquisition module, a picture acquisition module, and a sending module,
wherein when the terminal is in a motion state, the video acquisition module acquires video images of the terminal's surroundings and uploads them to the cloud server through the sending module; when the terminal is in a stationary state, the picture acquisition module acquires pictures of the terminal's surroundings and uploads them to the cloud server through the sending module.
9. The system according to claim 8, characterized in that the terminal comprises an adjustment module, the adjustment module comprising a compression module configured to adjust the captured video images to a first image quality standard, the first image quality standard being lower than the quality standard of the pictures.
10. The system according to claim 9, characterized in that the terminal further comprises a detection module; the video images at the first image quality standard are used for obstacle recognition, and the detection module determines the navigation state according to the recognized obstacles.
11. The system according to claim 8, characterized in that the terminal comprises an adjustment module configured to adjust the captured pictures to a second image quality standard; the images at the second image quality standard can be used for the recognition of environment information.
12. The system according to claim 9, characterized in that the compression module is further configured to adjust the captured video images to the first image quality standard when it is determined that the current environment meets a set environmental condition, the set environmental condition comprising being in a non-dangerous environment, the current environment being regarded as dangerous when it is a crossing.
13. The system according to claim 9 or 11, characterized by further comprising a generation module configured to determine the confidence of the navigation state and environment information.
14. The system according to claim 13, characterized by further comprising a manual service endpoint connected to the cloud server; when the confidence of the navigation state and environment information is below a threshold, the obstacle-avoidance information is obtained through the manual service endpoint.
15. An intelligent blind-guiding device, characterized by comprising a video acquisition module, a picture acquisition module, and a sending module,
wherein when the intelligent blind-guiding device is in a motion state, the video acquisition module acquires video images of the device's surroundings and uploads them through the sending module; when the intelligent blind-guiding device is in a stationary state, the picture acquisition module acquires pictures of the device's surroundings and uploads them through the sending module.
16. The intelligent blind-guiding device according to claim 15, characterized by further comprising an adjustment module and a detection module, the adjustment module comprising a compression module configured to adjust the captured video images to a first image quality standard, the first image quality standard being lower than the quality standard of the pictures; the video images at the first image quality standard are used for obstacle recognition, and the detection module determines the navigation state according to the recognized obstacles; the adjustment module is further configured to adjust the captured pictures to a second image quality standard, and the images at the second image quality standard can be used for the recognition of environment information; the adjustment module further comprises a single-channel grayscale conversion module, an image binarization module, an edge-extraction module, or a combination of these modules.
17. The intelligent blind-guiding device according to claim 15, characterized by
further comprising a voice interaction module configured to interact with the user by voice after the obstacle-avoidance information is received.
18. An image processing method, characterized by comprising the following steps:
receiving video images, the video images being images of the user's surroundings acquired when the terminal is in a motion state;
receiving pictures, the pictures being pictures of the user's surroundings acquired when the terminal is in a stationary state;
generating real-time obstacle-avoidance information, the obstacle-avoidance information being generated by recognizing the user's navigation state and environment information from the video images and/or pictures and based on that navigation state and environment information; and
sending the obstacle-avoidance information.
19. The method according to claim 18, characterized in that recognizing the user's navigation state and environment information further comprises determining the confidence of the navigation state and environment information; the obstacle-avoidance information comprises obstacle-avoidance travel instructions; and when the confidence of the navigation state and environment information is below a threshold, a manual service endpoint is connected and the obstacle-avoidance information sent by the manual service endpoint is obtained.
20. An electronic device, comprising:
at least one processor; and
a memory and a communication component communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and when the instructions are executed by the at least one processor, a data channel is established through the communication component, enabling the at least one processor to perform the method of any one of claims 1-7.
21. A computer program product, wherein the computer program product comprises a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1-7.
CN201780003265.9A 2017-08-02 2017-08-02 A kind of image processing method, system and intelligent blind-guiding device Pending CN108366899A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/095676 WO2019024010A1 (en) 2017-08-02 2017-08-02 Image processing method and system, and intelligent blind aid device

Publications (1)

Publication Number Publication Date
CN108366899A true CN108366899A (en) 2018-08-03

Family

ID=63011240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780003265.9A Pending CN108366899A (en) 2017-08-02 2017-08-02 A kind of image processing method, system and intelligent blind-guiding device

Country Status (2)

Country Link
CN (1) CN108366899A (en)
WO (1) WO2019024010A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110812142A (en) * 2019-10-18 2020-02-21 湖南红太阳新能源科技有限公司 Intelligent blind guiding system and method
CN111427343A (en) * 2020-01-16 2020-07-17 黑龙江科技大学 Intelligent blind guiding method and intelligent wheelchair

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5973618A (en) * 1996-09-25 1999-10-26 Ellis; Christ G. Intelligent walking stick
CN101485199A (en) * 2006-06-30 2009-07-15 摩托罗拉公司 Methods and devices for video correction of still camera motion
CN101986673A (en) * 2010-09-03 2011-03-16 浙江大学 Intelligent mobile phone blind-guiding device and blind-guiding method
CN102123194A (en) * 2010-10-15 2011-07-13 张哲颖 Method for optimizing mobile navigation and man-machine interaction functions by using augmented reality technology
CN102293709A (en) * 2011-06-10 2011-12-28 深圳典邦科技有限公司 Visible blindman guiding method and intelligent blindman guiding device thereof
CN202409427U (en) * 2011-12-01 2012-09-05 大连海事大学 Portable intelligent electronic blind guide instrument
CN103312899A (en) * 2013-06-20 2013-09-18 张家港保税区润桐电子技术研发有限公司 Smart phone with blind guide function
CN103637900A (en) * 2013-12-20 2014-03-19 北京航空航天大学 Intelligent blind guiding stick based on image identification
CN105227810A (en) * 2015-06-01 2016-01-06 西北大学 A kind of automatic focus helmet video camera based on BIBAVR algorithm
CN105591882A (en) * 2015-12-10 2016-05-18 北京中科汇联科技股份有限公司 Method and system for mixed customer services of intelligent robots and human beings
CN205622764U (en) * 2016-01-06 2016-10-05 国网重庆市电力公司电力科学研究院 Video monitoring equipment
KR20170040974A (en) * 2015-10-06 2017-04-14 주식회사 신성테크 System for protecting blind person using jacket for protecting upper body

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201899668U (en) * 2010-09-03 2011-07-20 浙江大学 Intelligent mobile phone blind guide device
CN106618981A (en) * 2016-12-06 2017-05-10 天津瑞世通科技有限公司 Blind living assistant system based on Internet of things and blind crutch with the system
CN106859929B (en) * 2017-01-25 2019-11-22 上海集成电路研发中心有限公司 A kind of Multifunctional blind person guiding instrument based on binocular vision

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5973618A (en) * 1996-09-25 1999-10-26 Ellis; Christ G. Intelligent walking stick
CN101485199A (en) * 2006-06-30 2009-07-15 摩托罗拉公司 Methods and devices for video correction of still camera motion
CN101986673A (en) * 2010-09-03 2011-03-16 浙江大学 Intelligent mobile phone blind-guiding device and blind-guiding method
CN102123194A (en) * 2010-10-15 2011-07-13 张哲颖 Method for optimizing mobile navigation and man-machine interaction functions by using augmented reality technology
CN102293709A (en) * 2011-06-10 2011-12-28 深圳典邦科技有限公司 Visible blindman guiding method and intelligent blindman guiding device thereof
CN202409427U (en) * 2011-12-01 2012-09-05 大连海事大学 Portable intelligent electronic blind guide instrument
CN103312899A (en) * 2013-06-20 2013-09-18 张家港保税区润桐电子技术研发有限公司 Smart phone with blind guide function
CN103637900A (en) * 2013-12-20 2014-03-19 北京航空航天大学 Intelligent blind guiding stick based on image identification
CN105227810A (en) * 2015-06-01 2016-01-06 西北大学 A kind of automatic focus helmet video camera based on BIBAVR algorithm
KR20170040974A (en) * 2015-10-06 2017-04-14 주식회사 신성테크 System for protecting blind person using jacket for protecting upper body
CN105591882A (en) * 2015-12-10 2016-05-18 北京中科汇联科技股份有限公司 Method and system for mixed customer services of intelligent robots and human beings
CN205622764U (en) * 2016-01-06 2016-10-05 国网重庆市电力公司电力科学研究院 Video monitoring equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110812142A (en) * 2019-10-18 2020-02-21 湖南红太阳新能源科技有限公司 Intelligent blind guiding system and method
CN111427343A (en) * 2020-01-16 2020-07-17 黑龙江科技大学 Intelligent blind guiding method and intelligent wheelchair

Also Published As

Publication number Publication date
WO2019024010A1 (en) 2019-02-07

Similar Documents

Publication Publication Date Title
US11295458B2 (en) Object tracking by an unmanned aerial vehicle using visual sensors
US11755041B2 (en) Objective-based control of an autonomous unmanned aerial vehicle
US9345967B2 (en) Method, device, and system for interacting with a virtual character in smart terminal
WO2016031105A1 (en) Information-processing device, information processing method, and program
US9569898B2 (en) Wearable display system that displays a guide for a user performing a workout
CN103625477B (en) Run the method and system of vehicle
CN112307642B (en) Data processing method, device, system, computer equipment and storage medium
JP2021513714A (en) Aircraft smart landing
US11378413B1 (en) Augmented navigational control for autonomous vehicles
US10571289B2 (en) Information processing device, information processing method, and program
EP4194811A1 (en) Robust vision-inertial pedestrian tracking with heading auto-alignment
CN109634263A (en) Based on data synchronous automatic Pilot method, terminal and readable storage medium storing program for executing
US20170352226A1 (en) Information processing device, information processing method, and program
CN108957505A (en) A kind of localization method, positioning system and portable intelligent wearable device
US20200341273A1 (en) Method, System and Apparatus for Augmented Reality
US11918883B2 (en) Electronic device for providing feedback for specific movement using machine learning model and operating method thereof
US10881937B2 (en) Image processing apparatus, analysis system, and method for processing images
CN110751336B (en) Obstacle avoidance method and obstacle avoidance device of unmanned carrier and unmanned carrier
US20230252689A1 (en) Map driven augmented reality
WO2020114214A1 (en) Blind guiding method and apparatus, storage medium and electronic device
CN108366899A (en) A kind of image processing method, system and intelligent blind-guiding device
CN108108018A (en) Commanding and training method, equipment and system based on virtual reality
CN115225815A (en) Target intelligent tracking shooting method, server, shooting system, equipment and medium
CN113343457A (en) Automatic driving simulation test method, device, equipment and storage medium
US11656089B2 (en) Map driven augmented reality

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180803