CN112434626A - Method and device for monitoring position of vision-impaired person in scenic spot - Google Patents

Method and device for monitoring position of vision-impaired person in scenic spot

Info

Publication number
CN112434626A
Authority
CN
China
Prior art keywords
vision, person, impaired person, impaired, indication information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011375733.5A
Other languages
Chinese (zh)
Other versions
CN112434626B (en)
Inventor
吴博琦
时准
张恒煦
迟耀丹
赵春蕾
王欢
王超
赵阳
赵春雷
杨小天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin Jianzhu University
Original Assignee
Jilin Jianzhu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin Jianzhu University
Priority to CN202011375733.5A
Publication of CN112434626A
Application granted
Publication of CN112434626B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y — INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00 — Information sensed or collected by the things
    • G16Y20/10 — Information sensed or collected by the things relating to the environment, e.g. temperature; relating to location
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y — INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00 — Information sensed or collected by the things
    • G16Y20/40 — Information sensed or collected by the things relating to personal data, e.g. biometric data, records or preferences
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y — INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00 — IoT characterised by the purpose of the information processing
    • G16Y40/10 — Detection; Monitoring
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y — INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00 — IoT characterised by the purpose of the information processing
    • G16Y40/60 — Positioning; Navigation
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/01 — Protocols
    • H04L67/02 — Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025 — Protocols based on web technology, e.g. hypertext transfer protocol [HTTP], for remote control or remote monitoring of applications
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/50 — Network services
    • H04L67/52 — Network services specially adapted for the location of the user terminal
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 — Television systems
    • H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04W — WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 — Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 — Services making use of location information
    • H04W4/023 — Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04W — WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 — Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 — Services making use of location information
    • H04W4/029 — Location-based management or tracking services
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04W — WIRELESS COMMUNICATION NETWORKS
    • H04W64/00 — Locating users or terminals or network equipment for network management purposes, e.g. mobility management

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Environmental & Geological Engineering (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Toxicology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and device for monitoring the position of a vision-impaired person in a scenic spot. When the service device does not receive, at a reporting moment at which the wearable device of the vision-impaired person needs to report the position, the position reported by the wearable device, the service device first instructs the service person of the vision-impaired person to feed back the position of the vision-impaired person. Then, if the service person feeds back that the vision-impaired person has not been found, a first position of the vision-impaired person is estimated from the positions reported by the wearable device at least twice before, video data collected by an image acquisition device near the first position is obtained, and a second position of the vision-impaired person is determined from the video data. Finally, the second position is sent to the service person, so that the service person can find the vision-impaired person in time and ensure the person's safety.

Description

Method and device for monitoring position of vision-impaired person in scenic spot
Technical Field
The application relates to the technical field of intelligent tourism, in particular to a method and a device for monitoring the position of a vision-impaired person in a scenic spot.
Background
With rising living standards and an accelerating pace of life, demand for entertainment has grown rapidly, and tourism has become a preferred leisure activity. In recent years, intelligent tourism projects have been introduced, and vision-impaired people are a travel service group that cannot be neglected. Because vision-impaired people have difficulty moving about, monitoring their positions is necessary to ensure their safety while visiting a scenic spot.
Disclosure of Invention
The embodiment of the application provides a method and a device for monitoring the position of a vision-impaired person in a scenic spot.
In a first aspect, an embodiment of the present application provides a method for monitoring the position of a vision-impaired person in a scenic spot, applied to a service device in an intelligent tourism system, where the intelligent tourism system further includes a wearable device of the vision-impaired person, a first mobile terminal of a service person for the vision-impaired person, and a plurality of image acquisition devices disposed in the scenic spot; the wearable device is used for periodically reporting the position of the vision-impaired person while the person moves freely in the scenic spot. The method includes the following steps:
when the position reported by the wearable device is not received at a first moment, sending first indication information to the first mobile terminal, where the first indication information is used to indicate that the position of the vision-impaired person should be fed back, and the first moment is a reporting moment at which the wearable device needs to report the position;
receiving first feedback information sent by the first mobile terminal, where the first feedback information is used to inform the service device that the vision-impaired person has not been found;
estimating a first position of the vision-impaired person according to the positions reported by the wearable device at least twice before;
acquiring first video data collected by a target image acquisition device, where the distance between the position of the target image acquisition device and the first position is smaller than or equal to a first threshold, the plurality of image acquisition devices include the target image acquisition device, and the temporal midpoint of the first video data is the first moment;
determining a second position of the vision-impaired person from the first video data;
and sending second indication information to the first mobile terminal, where the second indication information carries the second position and is used to indicate that the vision-impaired person should be searched for according to the second position.
In a second aspect, an embodiment of the present application provides a device for monitoring the position of a vision-impaired person in a scenic spot, applied to a service device in an intelligent tourism system, where the intelligent tourism system further includes a wearable device of the vision-impaired person, a first mobile terminal of a service person for the vision-impaired person, and a plurality of image acquisition devices disposed in the scenic spot; the wearable device is used for periodically reporting the position of the vision-impaired person while the person moves freely in the scenic spot. The device includes:
a sending unit, configured to send first indication information to the first mobile terminal when the position reported by the wearable device is not received at a first moment, where the first indication information is used to indicate that the position of the vision-impaired person should be fed back, and the first moment is a reporting moment at which the wearable device needs to report the position;
a receiving unit, configured to receive first feedback information sent by the first mobile terminal, where the first feedback information is used to inform the service device that the vision-impaired person has not been found;
an estimation unit, configured to estimate a first position of the vision-impaired person according to the positions reported by the wearable device at least twice before;
an acquisition unit, configured to acquire first video data collected by a target image acquisition device, where the distance between the position of the target image acquisition device and the first position is smaller than or equal to a first threshold, the plurality of image acquisition devices include the target image acquisition device, and the temporal midpoint of the first video data is the first moment;
a determining unit, configured to determine a second position of the vision-impaired person from the first video data;
the sending unit is further configured to send second indication information to the first mobile terminal, where the second indication information carries the second position and is used to indicate that the vision-impaired person should be searched for according to the second position.
In a third aspect, an embodiment of the present application provides a service device, including a processor, a memory, a transceiver, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the application, when the service device does not receive the position reported by the wearable device at a reporting moment at which the wearable device needs to report the position, the service device first instructs the service person of the vision-impaired person to feed back the position of the vision-impaired person. Then, if the service person feeds back that the vision-impaired person has not been found, the service device estimates a first position of the vision-impaired person according to the positions reported at least twice before by the wearable device, obtains video data collected by an image acquisition device near the first position, and determines a second position of the vision-impaired person according to the video data. Finally, the service device informs the service person of the second position, so that the service person can find the vision-impaired person in time and the safety of the vision-impaired person is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic view of an intelligent travel system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for monitoring the position of a vision-impaired person in a scenic spot according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a service device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a device for monitoring the position of a visually impaired person in a scenic spot according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic view of an intelligent tourism system according to an embodiment of the present disclosure. The system includes a service device, a wearable device of a vision-impaired person, a first mobile terminal of a service person for the vision-impaired person, a second mobile terminal of a scenic spot worker, and a plurality of image acquisition devices disposed in the scenic spot. The service device can communicate with the wearable device, the first mobile terminal, the second mobile terminal, and the plurality of image acquisition devices.
The wearable device is, for example, a smart cane or a smart watch.
The wearable device is used for periodically reporting the position of the vision-impaired person while the person moves freely in the scenic spot. Periodic reporting means, for example, reporting the position every 1 minute, every 5 minutes, and so on.
The wearable device determines that the vision-impaired person is moving freely in the scenic spot when it receives first target information sent by the service device.
The first target information is sent by the service device when it receives second target information sent by the first mobile terminal. The second target information is a position monitoring request, used to request monitoring of the position of the vision-impaired person; the first target information is a position reporting request, used to request reporting of the position of the vision-impaired person.
The service device is a device that provides computing services, such as a background server or another device.
The working principle of the intelligent tourism system of the embodiments of the application is as follows: when the service device does not receive the position reported by the wearable device at a reporting moment at which the wearable device needs to report the position, the service device first instructs the service person of the vision-impaired person to feed back the position of the vision-impaired person. If the service person feeds back that the vision-impaired person has not been found, the service device estimates a first position of the vision-impaired person according to the positions reported at least twice before by the wearable device, obtains video data collected by an image acquisition device near the first position, and determines a second position of the vision-impaired person according to the video data. Finally, the service device informs the service person of the second position, so that the service person can find the vision-impaired person in time and the safety of the vision-impaired person is ensured.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for monitoring the position of a vision-impaired person in a scenic spot. The method is applied to the above intelligent tourism system and includes the following steps.
Step 210: when the position reported by the wearable device is not received at a first moment, the service device sends first indication information to the first mobile terminal, where the first indication information is used to indicate that the position of the vision-impaired person should be fed back, and the first moment is a reporting moment at which the wearable device needs to report the position.
Step 220: the service device receives first feedback information sent by the first mobile terminal, where the first feedback information is used to inform the service device that the vision-impaired person has not been found.
Step 230: the service device estimates a first position of the vision-impaired person according to the positions reported by the wearable device at least twice before.
Here, the vision-impaired person is a blind person or a person with low vision.
In an implementation of the present application, the service device estimating the first position of the vision-impaired person according to the positions reported by the wearable device at least twice before includes:
the service device acquiring the height of the vision-impaired person;
the service device estimating the step length of the vision-impaired person according to the height;
the service device determining the traveling direction of the vision-impaired person according to the positions reported at least twice;
and the service device estimating the first position according to the positions reported at least twice, the step length, and the traveling direction.
The height of the vision-impaired person is uploaded to the service device by family members or friends of the vision-impaired person by filling in the data online.
The positions reported at least twice are connected, and the direction in which the connecting line extends is determined as the traveling direction of the vision-impaired person.
Optionally, the service device estimating the first position according to the positions reported at least twice, the step length, and the traveling direction includes:
the service device estimating the walking distance of the vision-impaired person according to a second moment, the current moment, and the step length, where the second moment is the moment at which the wearable device last reported a position;
and the service device estimating the first position according to the position last reported by the wearable device, the walking distance, and the traveling direction, where the positions reported the last two times include the position reported last.
Specifically, assuming that the second moment is 17:25:30, the current moment is 17:25:35, and the step length (treated here as a walking speed) is 50 cm/s, the walking distance is 5 s × 50 cm/s = 250 cm. If the position last reported by the wearable device is (a1, a2), the first position is obtained by extending 250 cm along the traveling direction starting from that position.
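As a sketch, the extrapolation described above (walking distance from the elapsed time and the step length, traveling direction from the line through the last two reported positions) might look like the following; the function and parameter names are hypothetical, and the "step length" is treated as a walking speed in cm/s, as in the worked example.

```python
import math

def extrapolate_first_position(p_prev, p_last, t_last, t_now, speed_cm_s):
    """Estimate the first position: extend the line through the last two
    reported positions (p_prev -> p_last) by the walking distance covered
    between t_last (second moment) and t_now (current moment)."""
    dx, dy = p_last[0] - p_prev[0], p_last[1] - p_prev[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return p_last  # no observed movement; keep the last reported position
    distance = (t_now - t_last) * speed_cm_s  # e.g. 5 s * 50 cm/s = 250 cm
    return (p_last[0] + distance * dx / norm,
            p_last[1] + distance * dy / norm)
```

With p_prev = (0, 0), p_last = (0, 100), a 5-second gap, and 50 cm/s, the estimated first position is 250 cm further along the same line, i.e. (0, 350).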
Optionally, the service device estimating the step length of the vision-impaired person according to the height includes:
the service device estimating the step length of the vision-impaired person according to a step-length calculation formula and the height, where the step-length calculation formula is: L1 = 0.5 × {k × [H − (2 × L2)/3]}, where L1 is the step length, H is the height, L2 is the footprint length, and k is the action coefficient.
In one implementation, the action coefficient of the vision-impaired person is a fixed value, such as 2/3, 1/2, or another value.
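Under the assumption that the step-length formula reads L1 = 0.5 × {k × [H − (2 × L2)/3]} with all lengths in centimetres, a minimal sketch (hypothetical names) is:

```python
def estimate_step_length(height_cm, footprint_cm, action_coefficient):
    """Step length per the patent's formula:
    L1 = 0.5 * {k * [H - (2 * L2) / 3]}
    where H is the height, L2 the footprint length, k the action coefficient."""
    return 0.5 * action_coefficient * (height_cm - (2.0 * footprint_cm) / 3.0)
```

For example, a height of 170 cm, a footprint length of 24 cm, and k = 0.5 give 0.5 × 0.5 × (170 − 16) = 38.5 cm.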
In another implementation, before the service device estimates the step length of the vision-impaired person according to the step-length calculation formula and the height, the method further includes:
the service device acquiring the action evaluation information of the vision-impaired person;
the service device analyzing the action evaluation information of the vision-impaired person to obtain an action evaluation feature set, where the action evaluation feature set includes at least one action evaluation feature;
the service device determining an action evaluation value corresponding to each action evaluation feature included in the action evaluation feature set to obtain an action evaluation value set, where the action evaluation value set includes at least one action evaluation value;
the service device determining the individual action probability of the vision-impaired person according to the action evaluation value set;
and the service device determining the action coefficient of the vision-impaired person based on the individual action probability of the vision-impaired person.
The service device determines the action coefficient of the vision-impaired person based on a first mapping relation and the individual action probability of the vision-impaired person, where the first mapping relation is a mapping between the action coefficient and the individual action probability of the vision-impaired person, as shown in Table 1.
TABLE 1
The action evaluation information of the vision-impaired person is uploaded to the service device by family members or friends of the vision-impaired person by filling in data online. The action evaluation information includes action evaluation information in an unfamiliar outdoor environment and in an unfamiliar indoor environment; it is expressed, for example, as: cannot act alone in an unfamiliar outdoor environment, inconvenient to act alone in an unfamiliar indoor environment, and so on.
The action evaluation features include action evaluation features of the unfamiliar outdoor environment and of the unfamiliar indoor environment, expressed, for example, as: cannot act alone in an unfamiliar outdoor environment, inconvenient to act alone in an unfamiliar outdoor environment, inconvenient to act alone in an unfamiliar indoor environment, and so on.
The service device determining an action evaluation value corresponding to each action evaluation feature included in the action evaluation feature set to obtain the action evaluation value set includes: the service device determining, according to a second mapping relation between action evaluation features and action evaluation values, the action evaluation value corresponding to each action evaluation feature included in the action evaluation feature set, to obtain the action evaluation value set; the second mapping relation is shown in Table 2.
TABLE 2
Action evaluation feature                                         Action evaluation value
Cannot act alone in an unfamiliar outdoor environment             0
Inconvenient to act alone in an unfamiliar outdoor environment    0.5
Can act alone in an unfamiliar outdoor environment                1
Cannot act alone in an unfamiliar indoor environment              0
Inconvenient to act alone in an unfamiliar indoor environment     0.5
Can act alone in an unfamiliar indoor environment                 1
……                                                                ……
For example, suppose the action evaluation information of the vision-impaired person is "cannot act alone in an unfamiliar outdoor environment, inconvenient to act alone in an unfamiliar indoor environment". Analyzing this information yields the action evaluation feature set (cannot act alone in an unfamiliar outdoor environment, inconvenient to act alone in an unfamiliar indoor environment). From Table 2, the corresponding action evaluation value set is (x1 = 0, x2 = 0.5), where x1 is the action evaluation value of the vision-impaired person in the outdoor environment and x2 is the action evaluation value in the indoor environment.
The service device determining the individual action probability of the vision-impaired person from the action evaluation value set includes:
determining the individual action probability of the vision-impaired person according to a first calculation formula and the action evaluation value set;
where the first calculation formula is:
X = Σ (i = 1 to n) Ki × Yi
where X is the individual action probability, Yi is the i-th action evaluation value in the action evaluation value set, Ki is the weight of the action evaluation feature corresponding to the i-th action evaluation value, and n is the number of action evaluation values included in the action evaluation value set.
The sum of the weights of the at least one action evaluation feature equals 1. In one case, every action evaluation feature has the same weight; for example, if there are 3 action evaluation features, the weight of each is 1/3.
For example, assuming the action evaluation value set is (x1 = 0, x2 = 0.5) and the weight of each action evaluation feature is 0.5, the individual action probability of the vision-impaired person obtained from the first calculation formula is 0.25.
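The weighted sum above can be sketched as follows; the feature-to-value mapping mirrors Table 2, and the function and dictionary names are hypothetical.

```python
# Excerpt of Table 2: action evaluation feature -> action evaluation value
ACTION_VALUES = {
    "cannot act alone in unfamiliar outdoor environment": 0.0,
    "inconvenient to act alone in unfamiliar outdoor environment": 0.5,
    "can act alone in unfamiliar outdoor environment": 1.0,
    "cannot act alone in unfamiliar indoor environment": 0.0,
    "inconvenient to act alone in unfamiliar indoor environment": 0.5,
    "can act alone in unfamiliar indoor environment": 1.0,
}

def individual_action_probability(features, weights):
    """X = sum(K_i * Y_i): weights K_i over action evaluation values Y_i."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(k * ACTION_VALUES[f] for k, f in zip(weights, features))
```

With the two features from the worked example and equal weights of 0.5, this yields 0.5 × 0 + 0.5 × 0.5 = 0.25, matching the value obtained above.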
Alternatively, different action evaluation features have different weights, and the weight of each action evaluation feature is preset, as shown in Table 3.
TABLE 3
Action evaluation feature                                         Weight
Cannot act alone in an unfamiliar outdoor environment             0.2
Inconvenient to act alone in an unfamiliar outdoor environment    0.4
Can act alone in an unfamiliar outdoor environment                0.6
Cannot act alone in an unfamiliar indoor environment              0.2
Inconvenient to act alone in an unfamiliar indoor environment     0.3
Can act alone in an unfamiliar indoor environment                 0.5
……                                                                ……
Step 240: the service device acquires first video data collected by a target image acquisition device, where the distance between the position of the target image acquisition device and the first position is smaller than or equal to a first threshold, the plurality of image acquisition devices include the target image acquisition device, and the temporal midpoint of the first video data is the first moment.
Step 250: the service equipment determines a second position of the vision-impaired person according to the first video data.
In an implementation manner of the present application, the determining, by the service device, the second position of the vision-impaired person according to the first video data includes:
the service equipment analyzes the first video data to obtain N face images, wherein N is a positive integer;
if the face image of the person with visual impairment exists in the N face images, the service equipment takes the position of the target image acquisition device as the second position;
if the face image of the vision-impaired person does not exist in the N face images and a face image of a scenic spot worker exists in the N face images, the service equipment sends third indication information to a second mobile terminal of the scenic spot worker, wherein the third indication information carries the face image of the vision-impaired person and is used for instructing the scenic spot worker to feed back the position of the vision-impaired person;
the service equipment receives second feedback information sent by the second mobile terminal;
and if the second feedback information carries a third position, the service equipment takes the third position as the second position.
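The branch logic above can be sketched as follows (a minimal illustration; the face-matching predicates and the worker-feedback callback are hypothetical stand-ins for the third-indication/second-feedback exchange):

```python
def determine_second_position(face_images, is_target, is_worker,
                              camera_position, request_worker_feedback):
    """Decision flow for the second position (illustrative sketch only).

    is_target / is_worker: hypothetical face-matching predicates.
    request_worker_feedback: hypothetical callback standing in for sending
    the third indication information and receiving the second feedback
    information; it returns a reported position or None.
    """
    # The vision-impaired person appears in the video: use the camera's position.
    if any(is_target(img) for img in face_images):
        return camera_position
    # Otherwise ask a scenic-spot worker seen in the video to report a position.
    if any(is_worker(img) for img in face_images):
        third_position = request_worker_feedback()
        if third_position is not None:
            return third_position
    return None  # fall through to the other measures in the text
```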
Optionally, the method further comprises:
if the second feedback information is used for informing that the vision-impaired person has not been found, the service equipment sends third indication information to the first mobile terminal, wherein the third indication information carries the first position and the traveling direction, and the third indication information is used for instructing the service staff to search for the vision-impaired person according to the first position and the traveling direction.
Wherein, when the service device sends the third indication information to the first mobile terminal, the method further comprises:
the service equipment sends a first information acquisition request to the wearable device, wherein the first information acquisition request is used for requesting acquisition of vein information in a first time period, and the midpoint of the first time period is the first moment;
the service equipment receives target vein information sent by the wearable equipment for the first information acquisition request;
if the target vein information is matched with the vein template information of the vision-impaired person, the service equipment sends fourth indication information to the wearable equipment, and the fourth indication information is used for indicating that the vision-impaired person stays still in place.
After the wearable device receives the fourth indication information, the wearable device outputs the fourth indication information in a voice output mode.
Step 260: the service equipment sends second indication information to the first mobile terminal, wherein the second indication information carries the second position, and the second indication information is used for instructing the service staff to search for the vision-impaired person according to the second position.
It can be seen that, in the embodiment of the present application, when the service device does not receive the position reported by the wearable device at a reporting time at which the wearable device needs to report its position, the service device first instructs the vision-impaired person's service staff to feed back the position of the vision-impaired person. If the service staff reports that the vision-impaired person has not been found, the service device estimates the first position of the vision-impaired person from the positions reported at least twice before by the wearable device, obtains the video data collected by the image acquisition devices near the first position, determines the second position of the vision-impaired person from that video data, and finally informs the service staff of the second position, so that the service staff can find the vision-impaired person in time and ensure the safety of the vision-impaired person.
In an implementation manner of the present application, after the service device sends the second indication information to the first mobile terminal, the method further includes:
the service equipment acquires a first individual action loss probability of the vision-impaired person;
the service equipment determines a second individual action loss probability of the vision disorder person according to the first individual action loss probability, the first time and the current time;
the service device updates the individual action loss probability of the visually impaired person to the second individual action loss probability;
if the difference value between the second individual action loss probability and the first individual action loss probability is larger than or equal to a preset threshold value, the service equipment sends first reminding information to the first mobile terminal, wherein the first reminding information carries the second individual action loss probability of the vision-impaired person, and the first reminding information is used for reminding the service staff to pay attention to the vision-impaired person.
Wherein the first individual action loss probability of the visually impaired person is stored in advance in the service device. The first individual action loss probability is, for example, 10%, 20%, 25%, 35%, or other value. The initial individual action loss probability of the visually impaired is set in advance by the visually impaired attendant.
The predetermined threshold is, for example, 10%, 15%, 20%, or other values.
Optionally, the determining, by the service device, a second individual action loss probability of the vision-impaired person according to the first individual action loss probability, the first time and the current time includes:
the service device determines a second individual action loss probability of the vision-impaired person based on a second calculation formula, the first individual action loss probability, the first time and a current time;
wherein the second calculation formula is: P1 = P2 + Pk × [(t2 - t1)/Tk], where P1 and P2 are both individual action loss probabilities, P1 is the individual action loss probability to be determined, t2 and t1 are both times, t2 is later than t1, and Pk and Tk are both fixed values.
Wherein, PkFor example, 3%, 5%, 10%, 13%, 15% or other values. T iskFor example, 5min, 10min, 15min, 25min or other values.
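Assuming Pk and Tk take illustrative values from the ranges above (5% and 10 min), the update in the second calculation formula can be sketched as:

```python
def updated_loss_probability(p_first, t_first, t_now, p_k=0.05, t_k=10.0):
    """Second individual action loss probability P1 = P2 + Pk * ((t2 - t1) / Tk).

    p_first: the stored first individual action loss probability (P2).
    t_first, t_now: the first time and the current time, here in minutes.
    p_k, t_k: illustrative fixed values (5% per 10 minutes), not mandated
    by the patent, which only gives example ranges.
    """
    return p_first + p_k * ((t_now - t_first) / t_k)

# 20 minutes after the missed report, a 20% probability grows by 2 * 5%.
print(updated_loss_probability(0.20, 0, 20))  # approximately 0.30
```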
It can be seen that, in the embodiment of the present application, when the vision-impaired person becomes lost, the individual action loss probability of the vision-impaired person is updated in time, and under specific conditions the updated probability is sent to the vision-impaired person's service staff to remind the service staff to pay particular attention to the vision-impaired person, so that the safety of the vision-impaired person is further ensured.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a service device according to an embodiment of the present application, and as shown in the drawing, the service device includes a processor, a memory, a transceiver port, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
when the position reported by the wearable device is not received at a first moment, sending first indication information to the first mobile terminal, wherein the first indication information is used for indicating the position of the visually impaired person to be fed back, and the first moment is the reporting moment of the position needing to be reported by the wearable device;
receiving first feedback information sent by the first mobile terminal, wherein the first feedback information is used for informing that the vision-impaired person has not been found;
estimating a first position of the vision disorder person according to positions reported by the wearable device at least twice before;
acquiring first video data collected by a target image acquisition device, wherein the distance between the position of the target image acquisition device and the first position is smaller than or equal to a first threshold, the plurality of image acquisition devices include the target image acquisition device, and the midpoint of the time span of the first video data is the first moment;
determining a second location of the visually impaired from the first video data;
and sending second indication information to the first mobile terminal, wherein the second indication information carries the second position, and the second indication information is used for indicating that the vision disorder person is searched according to the second position.
In an implementation manner of the present application, in terms of estimating the first position of the visually impaired according to the positions reported by the wearable device at least twice before, the program includes instructions specifically configured to perform the following steps:
acquiring the height of the person with the visual disorder;
estimating the step length of the vision-impaired person according to the height;
determining the advancing direction of the vision disorder person according to the positions reported at least twice;
and predicting the first position according to the positions reported at least twice, the step length and the traveling direction.
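One possible reading of these four steps is straight-line extrapolation along the direction implied by the last two reports; the patent does not spell out the extrapolation, so the following is only an assumed sketch with hypothetical names:

```python
import math

def predict_first_position(prev_pos, last_pos, step_length, steps_elapsed):
    """Extrapolate the first position from the last two reported positions.

    prev_pos, last_pos: (x, y) positions reported by the wearable device.
    step_length: estimated from the height (see the step length formula).
    steps_elapsed: assumed number of steps taken since the last report.
    """
    dx, dy = last_pos[0] - prev_pos[0], last_pos[1] - prev_pos[1]
    distance = math.hypot(dx, dy)
    if distance == 0:
        return last_pos  # no travel direction can be inferred
    ux, uy = dx / distance, dy / distance  # unit vector of the travel direction
    travelled = step_length * steps_elapsed
    return (last_pos[0] + ux * travelled, last_pos[1] + uy * travelled)

# Walking north 10 m between reports, step length 0.7 m, 10 further steps:
print(predict_first_position((0.0, 0.0), (0.0, 10.0), 0.7, 10))  # (0.0, 17.0)
```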
In an implementation of the application, in estimating the stride length of the visually impaired person from the height, the program comprises instructions for performing the steps of:
estimating the step length of the vision-impaired person according to a step length calculation formula and the height, wherein the step length calculation formula is as follows: L1 = 0.5 × {k × [H - (2 × L2)/3]}, where L1 is the step length, H is the height, L2 is the footprint length, and k is the action coefficient.
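As a quick sketch of the step length calculation formula (the units and sample values are illustrative assumptions, not given by the patent):

```python
def estimate_step_length(height, footprint_length, k):
    """Step length L1 = 0.5 * { k * [ H - (2 * L2) / 3 ] }.

    height (H) and footprint_length (L2) are assumed to share one unit
    (e.g. centimetres); k is the action coefficient.
    """
    return 0.5 * (k * (height - (2 * footprint_length) / 3))

# For example, H = 170 cm, L2 = 24 cm, k = 1.0:
print(estimate_step_length(170, 24, 1.0))  # 77.0
```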
In one implementation of the present application, before estimating the step size of the visually impaired person based on the step size calculation formula and the height, the program includes instructions further for:
acquiring action evaluation information of the vision disorder person;
analyzing the action evaluation information of the vision disorder person to obtain an action evaluation feature set, wherein the action evaluation feature set comprises at least one action evaluation feature;
determining action evaluation values corresponding to action evaluation features included in the action evaluation feature set to obtain an action evaluation value set, wherein the action evaluation value set includes at least one action evaluation value;
determining an individual actionable probability of the vision-impaired person from the set of action assessment values;
determining a mobility coefficient of the vision-impaired person based on the individual mobility probabilities of the vision-impaired person.
In one implementation of the present application, in determining the second location of the vision-impaired person based on the first video data, the program includes instructions for further performing the steps of:
analyzing the first video data to obtain N face images, wherein N is a positive integer;
if the human face images of the vision-impaired people exist in the N human face images, taking the position of the target image acquisition device as the second position;
if the face image of the vision-impaired person does not exist in the N face images and a face image of a scenic spot worker exists in the N face images, sending third indication information to a second mobile terminal of the scenic spot worker, wherein the third indication information carries the face image of the vision-impaired person and is used for instructing the scenic spot worker to feed back the position of the vision-impaired person;
receiving second feedback information sent by the second mobile terminal;
and if the second feedback information carries a third position, taking the third position as the second position.
It should be noted that, for the specific implementation process of the present embodiment, reference may be made to the specific implementation process described in the above method embodiment, and a description thereof is omitted here.
Referring to fig. 4, fig. 4 is a view illustrating a device for monitoring the position of a visually impaired person in a scenic spot, which is applied to a service device in an intelligent tourism system according to an embodiment of the present application, and the device includes:
a sending unit 410, configured to send first indication information to the first mobile terminal when the position reported by the wearable device is not received at a first time, where the first indication information is used to indicate that the position of the visually impaired person is fed back, and the first time is a reporting time at which the wearable device needs to report the position;
a receiving unit 420, configured to receive first feedback information sent by the first mobile terminal, where the first feedback information is used to inform that the vision-impaired person has not been found;
the estimating unit 430 is configured to estimate a first position of the visually impaired according to positions reported by the wearable device at least twice before;
an obtaining unit 440, configured to obtain first video data collected by a target image acquisition device, where the distance between the position of the target image acquisition device and the first position is smaller than or equal to a first threshold, the plurality of image acquisition devices include the target image acquisition device, and the midpoint of the time span of the first video data is the first time;
a determining unit 450 for determining a second position of the visually impaired person from the first video data;
the sending unit 410 is further configured to send second indication information to the first mobile terminal, where the second indication information carries the second location, and the second indication information is used to indicate that the vision-impaired person is to be found according to the second location.
In an implementation manner of the present application, in estimating the first position of the visually impaired according to the positions reported by the wearable device at least twice, the estimating unit 430 is specifically configured to:
acquiring the height of the person with the visual disorder;
estimating the step length of the vision-impaired person according to the height;
determining the advancing direction of the vision disorder person according to the positions reported at least twice;
and predicting the first position according to the positions reported at least twice, the step length and the traveling direction.
In an implementation manner of the present application, in estimating the step length of the vision-impaired person according to the height, the estimating unit 430 is specifically configured to:
estimating the step length of the vision-impaired person according to a step length calculation formula and the height, wherein the step length calculation formula is as follows: L1 = 0.5 × {k × [H - (2 × L2)/3]}, where L1 is the step length, H is the height, L2 is the footprint length, and k is the action coefficient.
In an implementation manner of the present application, before estimating the step size of the vision-impaired person according to the step size calculation formula and the height, the estimating unit 430 is further configured to:
acquiring action evaluation information of the vision disorder person;
analyzing the action evaluation information of the vision disorder person to obtain an action evaluation feature set, wherein the action evaluation feature set comprises at least one action evaluation feature;
determining action evaluation values corresponding to action evaluation features included in the action evaluation feature set to obtain an action evaluation value set, wherein the action evaluation value set includes at least one action evaluation value;
determining an individual actionable probability of the vision-impaired person from the set of action assessment values;
determining a mobility coefficient of the vision-impaired person based on the individual mobility probabilities of the vision-impaired person.
In an implementation manner of the present application, in determining the second position of the vision-impaired person according to the first video data, the determining unit 450 is specifically configured to:
analyzing the first video data to obtain N face images, wherein N is a positive integer;
if the human face images of the vision-impaired people exist in the N human face images, taking the position of the target image acquisition device as the second position;
if the face image of the vision-impaired person does not exist in the N face images and a face image of a scenic spot worker exists in the N face images, sending third indication information to a second mobile terminal of the scenic spot worker, wherein the third indication information carries the face image of the vision-impaired person and is used for instructing the scenic spot worker to feed back the position of the vision-impaired person;
receiving second feedback information sent by the second mobile terminal;
and if the second feedback information carries a third position, taking the third position as the second position.
It should be noted that the estimation unit 430, the obtaining unit 440, and the determination unit 450 may be implemented by a processor, and the sending unit 410 and the receiving unit 420 may be implemented by a transceiver.
The present application also provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the service device in the above method embodiments.
Embodiments of the present application also provide a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the service apparatus in the method. The computer program product may be a software installation package.
The steps of a method or algorithm described in the embodiments of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an access network device, a target network device, or a core network device. Of course, the processor and the storage medium may also reside as discrete components in an access network device, a target network device, or a core network device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functionality described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital video disc (DVD)), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. A position monitoring method of a vision-impaired person in a scenic spot is characterized by being applied to service equipment in an intelligent tourism system, wherein the intelligent tourism system further comprises wearing equipment of the vision-impaired person, a first mobile terminal of a vision-impaired person service staff and a plurality of image acquisition devices arranged in the scenic spot; the wearable device is used for periodically reporting the position of the vision-impaired person under the condition of free movement in the scenic spot; the method comprises the following steps:
when the position reported by the wearable device is not received at a first moment, sending first indication information to the first mobile terminal, wherein the first indication information is used for indicating the position of the visually impaired person to be fed back, and the first moment is the reporting moment of the position needing to be reported by the wearable device;
receiving first feedback information sent by the first mobile terminal, wherein the first feedback information is used for informing that the vision-impaired person has not been found;
estimating a first position of the vision disorder person according to positions reported by the wearable device at least twice before;
acquiring first video data collected by a target image acquisition device, wherein the distance between the position of the target image acquisition device and the first position is smaller than or equal to a first threshold, the plurality of image acquisition devices include the target image acquisition device, and the midpoint of the time span of the first video data is the first moment;
determining a second location of the visually impaired from the first video data;
and sending second indication information to the first mobile terminal, wherein the second indication information carries the second position, and the second indication information is used for indicating that the vision disorder person is searched according to the second position.
2. The method of claim 1, wherein estimating the first position of the visually impaired according to the positions reported by the wearable device at least twice before comprises:
acquiring the height of the person with the visual disorder;
estimating the step length of the vision-impaired person according to the height;
determining the advancing direction of the vision disorder person according to the positions reported at least twice;
and predicting the first position according to the positions reported at least twice, the step length and the traveling direction.
3. The method of claim 2, wherein estimating the stride length of the vision-impaired person based on the height comprises:
estimating the step length of the vision-impaired person according to a step length calculation formula and the height, wherein the step length calculation formula is as follows: L1 = 0.5 × {k × [H - (2 × L2)/3]}, where L1 is the step length, H is the height, L2 is the footprint length, and k is the action coefficient.
4. The method of claim 3, wherein prior to estimating the step size of the vision-impaired person based on the step size calculation formula and the height, the method further comprises:
acquiring action evaluation information of the vision disorder person;
analyzing the action evaluation information of the vision disorder person to obtain an action evaluation feature set, wherein the action evaluation feature set comprises at least one action evaluation feature;
determining action evaluation values corresponding to action evaluation features included in the action evaluation feature set to obtain an action evaluation value set, wherein the action evaluation value set includes at least one action evaluation value;
determining an individual actionable probability of the vision-impaired person from the set of action assessment values;
determining a mobility coefficient of the vision-impaired person based on the individual mobility probabilities of the vision-impaired person.
5. The method of any of claims 1-4, wherein said determining a second location of the visually impaired from the first video data comprises:
analyzing the first video data to obtain N face images, wherein N is a positive integer;
if the human face images of the vision-impaired people exist in the N human face images, taking the position of the target image acquisition device as the second position;
if the face image of the vision-impaired person does not exist in the N face images and a face image of a scenic spot worker exists in the N face images, sending third indication information to a second mobile terminal of the scenic spot worker, wherein the third indication information carries the face image of the vision-impaired person and is used for instructing the scenic spot worker to feed back the position of the vision-impaired person;
receiving second feedback information sent by the second mobile terminal;
and if the second feedback information carries a third position, taking the third position as the second position.
6. The position monitoring device for the vision-impaired people in the scenic spot is characterized by being applied to service equipment in an intelligent tourism system, wherein the intelligent tourism system further comprises wearing equipment for the vision-impaired people, a first mobile terminal for a vision-impaired person service staff and a plurality of image acquisition devices arranged in the scenic spot; the wearable device is used for periodically reporting the position of the vision-impaired person under the condition of free movement in the scenic spot; the device comprises:
a sending unit, configured to send first indication information to the first mobile terminal when the position reported by the wearable device is not received at a first time, where the first indication information is used to indicate that the position of the visually impaired person is fed back, and the first time is a reporting time at which the wearable device needs to report the position;
a receiving unit, configured to receive first feedback information sent by the first mobile terminal, where the first feedback information is used to inform that the vision-impaired person has not been found;
the pre-estimation unit is used for pre-estimating the first position of the vision disorder person according to the positions reported by the wearable device at least twice before;
the device comprises an acquisition unit, a first time calculation unit and a second time calculation unit, wherein the acquisition unit is used for acquiring first video data acquired by a target image acquisition device, the distance between the position of the target image acquisition device and the first position is smaller than or equal to a first threshold, the plurality of image acquisition devices comprise the target acquisition device, and the middle node of the first video data is the first time;
a determining unit for determining a second position of the visually impaired person from the first video data;
the sending unit is further configured to send second indication information to the first mobile terminal, where the second indication information carries the second location, and the second indication information is used to indicate that the vision-impaired person is to be found according to the second location.
7. The apparatus of claim 6, wherein, in estimating the first position of the visually impaired according to the positions reported by the wearable device at least twice before, the estimating unit is specifically configured to:
acquiring the height of the person with the visual disorder;
estimating the step length of the vision-impaired person according to the height;
determining the advancing direction of the vision disorder person according to the positions reported at least twice;
and predicting the first position according to the positions reported at least twice, the step length and the traveling direction.
8. The apparatus of claim 7, wherein in estimating the stride length of the visually impaired based on the height, the estimating unit is specifically configured to:
estimating the step length of the vision-impaired person according to a step length calculation formula and the height, wherein the step length calculation formula is as follows: L1 = 0.5 × {k × [H - (2 × L2)/3]}, where L1 is the step length, H is the height, L2 is the footprint length, and k is the action coefficient.
9. A serving device comprising a processor, a memory, a transceiver, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of any of claims 1-5.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN202011375733.5A 2020-11-30 2020-11-30 Method and device for monitoring position of vision-impaired person in scenic spot Active CN112434626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011375733.5A CN112434626B (en) 2020-11-30 2020-11-30 Method and device for monitoring position of vision-impaired person in scenic spot


Publications (2)

Publication Number Publication Date
CN112434626A true CN112434626A (en) 2021-03-02
CN112434626B CN112434626B (en) 2022-08-02

Family

ID=74698950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011375733.5A Active CN112434626B (en) 2020-11-30 2020-11-30 Method and device for monitoring position of vision-impaired person in scenic spot

Country Status (1)

Country Link
CN (1) CN112434626B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000293785A * 1999-04-02 2000-10-20 Nec Corp Method and system for searching for a lost child
CN107393256A * 2017-07-31 2017-11-24 深圳前海弘稼科技有限公司 Loss-prevention method, server and terminal device
CN108960048A * 2018-05-23 2018-12-07 国政通科技股份有限公司 Big-data-based method and system for searching for missing tourists in scenic spots
CN110322660A * 2019-07-18 2019-10-11 陈硕坚 Child protection method and system based on smart bracelet, big data and the Internet
CN110706451A * 2019-10-15 2020-01-17 上海无线电设备研究所 Anti-loss device, anti-loss system and anti-loss method
CN111025232A * 2019-11-29 2020-04-17 泰康保险集团股份有限公司 Bluetooth positioning method, Bluetooth positioning device, electronic equipment and storage medium
CN111035542A * 2019-12-24 2020-04-21 开放智能机器(上海)有限公司 Intelligent blind guiding system based on image recognition
CN111951968A * 2020-08-24 2020-11-17 杭州宣迅电子科技有限公司 Big-data-based tourism safety early warning management system

Also Published As

Publication number Publication date
CN112434626B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
US9293025B2 (en) Emergency detection and alert apparatus with floor elevation learning capabilities
US20180137735A1 (en) Abnormality detection method, recording medium, and information processing apparatus
US10805767B2 (en) Method for tracking the location of a resident within a facility
CN114942434B Fall gesture recognition method and system based on millimeter-wave radar point cloud
JP6351860B2 (en) Action identification device, air conditioner and robot control device
CN110933632A (en) Terminal indoor positioning method and system
CN106028391B (en) People flow statistical method and device
WO2013138174A1 (en) Location correction
JP2020024688A (en) Information service system, information service method, and program
CN112434626B (en) Method and device for monitoring position of vision-impaired person in scenic spot
JP2018073389A (en) Data processing device and data processing method
US20220022007A1 (en) Integrated intelligent building management system
CN113129334A (en) Object tracking method and device, storage medium and wearable electronic equipment
US20220189627A1 (en) Drive-through medical treatment system and drive-through medical treatment method
CN112365153B (en) Method for making travel plan of vision-impaired person and related device
CN115002657B (en) Medicine monitoring method and system based on multidimensional information acquisition and intelligent processing
CN106940720B (en) Multi-source information processing method and system based on healthy Internet of things
KR102572895B1 (en) Apparatus for PDR Based on Deep Learning using multiple sensors embedded in smartphones and GPS location signals and method thereof
Khaoampai et al. FloorLoc-SL: Floor localization system with fingerprint self-learning mechanism
US20170224254A1 (en) Analyzing system and analyzing method for evaluating calorie consumption by detecting the intensity of wireless signal
CN110260884B (en) Biological monitoring method, terminal and server
CN111486990A (en) Human body state monitoring method, system, device and storage medium
CN112985409B (en) Navigation method and related device for vision disorder person
US20230108162A1 (en) Data collection system, data collection device, data acquisition device, and data collection method
KR101627533B1 (en) System and method for predicting of user situation, and recording medium for performing the method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant