CN111046234B - Information display method, corresponding device, terminal, server and storage medium - Google Patents


Info

Publication number
CN111046234B
CN111046234B · CN201911108492.5A · CN 111046234 B
Authority
CN
China
Prior art keywords
data
building
target building
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911108492.5A
Other languages
Chinese (zh)
Other versions
CN111046234A (en)
Inventor
郭进毫
张春雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201911108492.5A priority Critical patent/CN111046234B/en
Publication of CN111046234A publication Critical patent/CN111046234A/en
Application granted granted Critical
Publication of CN111046234B publication Critical patent/CN111046234B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/787Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephonic Communication Services (AREA)

Abstract

An embodiment of the application discloses an information display method, together with a corresponding device, terminal, server and storage medium. The method comprises the following steps: sending a building information display request to a server; receiving target building data and current position information of a wearable device, wherein the target building data comprises building data of buildings within a preset distance of the wearable device's current position; extracting image features of a monitored video frame image as feature data to be identified; retrieving and matching the feature data to be identified against target feature data in the target building data to determine target building information; and adding the target building information to the video frame image for display. In this way, a parent communicating with a child's wearable device through a mobile phone can intuitively see the child's surroundings, learn in time whether the child is safe, and enjoy an improved interaction experience.

Description

Information display method, corresponding device, terminal, server and storage medium
Technical Field
The embodiments of the application relate to image recognition and augmented reality technology, and in particular to an information display method, a corresponding device, a terminal, a server and a storage medium.
Background
With the progress of science and technology and the improvement of living standards, ever more electronic products have entered people's lives and brought great convenience. For example, a parent may make a voice or video call to a child's wearable device from their own mobile phone in order to follow the child's activities in time.
In the related art, when a parent's mobile phone is on a video call with a child, the parent usually wants to know whether the child's current environment is safe. However, wearable devices that currently support video calls only display the street information of the device's current position; the parent cannot intuitively see the child's surroundings and therefore cannot judge whether the child is safe.
Disclosure of Invention
The application provides an information display method and a corresponding device, terminal, server and storage medium to solve the problems in the prior art that parents cannot intuitively see their child's surroundings, cannot tell whether those surroundings are safe, and consequently have a poor interaction experience.
The application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides an information display method, including:
Sending a building information display request to a server;
receiving target building data and current position information of a wearable device sent by a server, wherein the target building data comprises building data of buildings within a preset distance from the current position of the wearable device;
extracting the image characteristics of the monitored video frame image as characteristic data to be identified;
searching and matching the characteristic data to be identified with target characteristic data in the target building data to determine target building information;
and adding the target building information to the video frame image for display.
In a second aspect, an embodiment of the present application provides an information display method, including:
receiving a building information display request from a terminal;
acquiring the current position of the wearable device;
searching in the pre-stored picture characteristic data to determine building data of a building within a preset distance from the current position of the wearable equipment as target building data;
and sending the target building data and the current position information of the wearable equipment to a terminal so as to instruct the terminal to determine target building information and add the target building information to a corresponding video frame image for display.
In a third aspect, an embodiment of the present application provides an information display apparatus, including:
the request sending module is used for sending a building information display request to the server;
the data receiving module is used for receiving target building data and current position information of a wearable device sent by a server, wherein the target building data comprises building data of buildings within a preset distance from the current position of the wearable device;
the feature extraction module is used for extracting the image features of the monitored video frame images as feature data to be identified;
the target building information determining module is used for carrying out retrieval matching on the characteristic data to be identified and target characteristic data in the target building data so as to determine target building information;
and the display module is used for adding the target building information to the video frame image for display.
In a fourth aspect, an embodiment of the present application provides an information display apparatus, including:
the request receiving module is used for receiving a building information display request from the terminal;
the position acquisition module is used for acquiring the current position of the wearable equipment;
the target building data determining module is used for searching in the pre-stored picture characteristic data to determine building data of a building within a preset distance from the current position of the wearable equipment as target building data;
And the data sending module is used for sending the target building data and the current position information of the wearable equipment to a terminal so as to instruct the terminal to determine the target building information and add the target building information to a corresponding video frame image for display.
In a fifth aspect, embodiments of the present application provide a terminal, including a memory and one or more processors;
the memory is used for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the information display method as described in the first aspect.
In a sixth aspect, embodiments of the present application provide a server comprising a memory and one or more processors;
the memory is used for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the information display method as described in the second aspect.
In a seventh aspect, embodiments of the present application provide a storage medium containing computer executable instructions which, when executed by a computer processor, are adapted to carry out the information display method according to the first and second aspects.
The technical scheme adopted by the application has the following beneficial effects. The terminal sends a building information display request to the server and then receives the target building data and the current position information of the wearable device, where the target building data comprises building data only of buildings within a preset distance of the wearable device's current position; because the terminal need not consider all building data during processing, both the speed and the accuracy of retrieval matching are improved. The terminal then extracts the image features of the monitored video frame image as feature data to be identified, retrieves and matches them against the target feature data in the target building data to determine target building information, and finally adds that information to the video frame image for display. Parents can thus see the child's surroundings visually and in time, conveniently judge whether the child's current environment is safe, and enjoy an improved interaction experience.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
Fig. 1 is a flowchart of an information display method according to an embodiment of the present application;
FIG. 2 is a flowchart of another information display method according to an embodiment of the present application;
FIG. 3 is a diagram of a terminal display interface according to an embodiment of the present application;
FIG. 4 is a flowchart of another information display method according to an embodiment of the present application;
FIG. 5 is a flowchart of another information display method according to an embodiment of the present application;
fig. 6 is a signaling flow chart of an information display method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an information display device according to an embodiment of the present application;
fig. 8 is a schematic structural view of another information display device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of specific embodiments of the present application is given with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the matters related to the present application are shown in the accompanying drawings. Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
First, the basic concepts used in the embodiments of the application are described. The color moment is a simple and efficient representation of color. Its mathematical basis is that the color distribution of any image can be represented by its moments; since most of the information in a color distribution is concentrated in its low-order moments, the overall color distribution of an image can be approximated by the first, second and third color moments, where the first moment may be, for example, a mean, the second moment a variance, and the third moment a skewness. The RANSAC (Random Sample Consensus) algorithm computes the parameters of a mathematical model from a sample data set that includes abnormal data, yielding the valid sample data. The LSH (Locality Sensitive Hashing) algorithm maps data so that two points that are adjacent in a high-dimensional data space remain adjacent with high probability in the low-dimensional space, while two points that are not adjacent in the high-dimensional space remain, with high probability, not adjacent. With such a mapping, neighboring data points can be found in the low-dimensional space, avoiding the time-consuming search in the high-dimensional space; a hash mapping with this property is called locally sensitive.
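The three color moments described above are commonly defined as follows (a standard formulation, not quoted from the patent), for the pixel values p of color channel i over N pixels:

```latex
E_i = \frac{1}{N}\sum_{j=1}^{N} p_{ij} \quad \text{(first moment: mean)}

\sigma_i = \left(\frac{1}{N}\sum_{j=1}^{N}\bigl(p_{ij}-E_i\bigr)^2\right)^{1/2} \quad \text{(second moment: standard deviation)}

s_i = \left(\frac{1}{N}\sum_{j=1}^{N}\bigl(p_{ij}-E_i\bigr)^3\right)^{1/3} \quad \text{(third moment: skewness)}
```

For a three-channel image this yields a compact nine-dimensional feature vector, which is what makes color moments inexpensive to match.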
Second, the terminal may be, for example, a parent's mobile phone, and the wearable device may be, for example, a child's phone watch. In the application scenario of the embodiments, a parent makes a video call to the wearable device through the mobile phone; not only is the street information of the child's current position displayed, but the surroundings of the wearable device are also shown on the mobile phone in a visual way, helping the parent learn the child's position and safety situation in time.
Fig. 1 is a flowchart of an information display method according to an embodiment of the present application. The information display method of this embodiment may be performed by an information display device, which is typically integrated in a terminal and may be implemented in hardware and/or software. Referring to fig. 1, the method may specifically include:
s101, sending a building information display request to a server.
Specifically, when a parent wants to see the child's surroundings, a building information display request can be sent to the server; this may occur while a video call link to the child's phone watch is connected. In a specific example, the video call may be implemented through a dedicated APP (application program): the parent opens the APP and enters the video call invitation interface, which provides a hover button, and clicking the hover button sends a building information display request to the server. If the parent has no need to see the child's surroundings, the hover button can be set to a hidden state, from which it can still be quickly clicked when needed. Alternatively, an audio capture button may be provided on the APP's current page to receive the building information display request from the parent by voice. These two ways are merely examples and are not limiting.
S102, receiving target building data and current position information of the wearable device, wherein the target building data comprise building data of buildings within a preset distance from the current position of the wearable device, and the current position information is sent by a server.
Specifically, the server sends the terminal the building data of buildings within a preset distance of the wearable device's current position, together with the wearable device's current position information; the preset distance may be, for example, 500 meters. The parent's mobile phone thus receives the building data of buildings within 500 meters of the child's phone watch, along with the watch's current position information, and saves this data locally for later use in retrieval matching.
Optionally, the server stores in advance the picture feature data of a preset number of buildings in a preset area. The preset area may be set by the parent through the mobile phone, covering places the child frequently goes or is likely to go, such as a school or an amusement park. The preset number may cover buildings meeting preset conditions, for example buildings taller than 30 meters or of a certain scale, so that they are highly recognizable landmarks and parents can quickly recognize them and judge the retrieval matching results.
S103, extracting the image characteristics of the monitored video frame image as characteristic data to be identified.
During the video call between the parent's mobile phone and the child's phone watch, the video picture consists of successive video frame images. The mobile phone monitors each video frame image in real time and renders it for display, so the call appears to parent and child as a smooth video. The monitoring may be realized through VR (Virtual Reality) recognition monitoring, which the mobile phone starts and stops automatically after recognition completes, without any parent operation. In practice, after the parent requests the child's surroundings from the server, the mobile phone monitors each video frame image and extracts the image features of the current frame as the feature data to be identified.
In a specific example, the feature data to be identified is color moment data, which may include the first, second and third moments: for each video frame image, the first, second and third moments are extracted, and they may carry the identifier of that frame, so that each group of first, second and third moments uniquely determines one video frame image.
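The per-frame color-moment extraction can be sketched as follows. This is an illustrative implementation, not the patent's own code: the frame is represented here as a plain list of RGB pixel tuples standing in for a decoded video frame, and the signed cube root for the third moment is an assumption.

```python
import math

def color_moments(frame):
    """Extract first, second and third color moments per channel.

    `frame`: H x W list of (R, G, B) pixel tuples (a stand-in for a
    decoded video frame image).
    Returns a 9-element feature vector: [mean, std, cbrt(skew)] for
    each of the three channels.
    """
    feats = []
    for c in range(3):
        values = [px[c] for row in frame for px in row]
        n = len(values)
        mean = sum(values) / n                              # first moment
        var = sum((v - mean) ** 2 for v in values) / n
        std = math.sqrt(var)                                # second moment
        third = sum((v - mean) ** 3 for v in values) / n
        skew = math.copysign(abs(third) ** (1 / 3), third)  # third moment (signed cube root)
        feats.extend([mean, std, skew])
    return feats
```

A frame's 9-dimensional vector like this, tagged with the frame identifier, is what would be matched against the server-provided target feature data.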
And S104, searching and matching the feature data to be identified with target feature data in the target building data to determine target building information.
The target building data contains target feature data; optionally, the target feature data is color moment data, i.e. the first, second and third moments of images of buildings within the preset distance of the child phone watch's current position. Specifically, the feature data to be identified is retrieved and matched against the target feature data in the target building data, and a corresponding matching degree is calculated; for example, the matching degree is 100% when the first, second and third moments all match. The matching degree between the feature data to be identified and each target building's target feature data can thus be computed, and the target building information determined accordingly. In a specific example, the target building information may be the building information of the building with the highest matching degree, or of the buildings with the highest and next-highest matching degrees.
S105, adding the target building information to the video frame image for display.
Specifically, the target building information is added to the corresponding video frame image for display. In practice, if the child is moving, the target building information corresponding to each video frame image may differ; accordingly, when adding target building information to a video frame image, the correspondence between the two can be determined by identifying the timestamp of the current video frame.
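The timestamp-based correspondence described above can be sketched as a small lookup keyed by frame timestamp. This is a hypothetical illustration (the names and the "most recent result at or before this frame" policy are assumptions, since the patent only says the timestamp determines the correspondence):

```python
# Map from frame timestamp (ms) to the building info determined for it.
overlay_by_ts = {}

def on_building_info(ts_ms, building_info):
    """Record the target building info determined for the frame at ts_ms."""
    overlay_by_ts[ts_ms] = building_info

def overlay_for_frame(ts_ms):
    """Return the building info to overlay on the frame at ts_ms.

    Uses the most recent result at or before this frame's timestamp, so a
    slightly delayed match still annotates the following frames correctly.
    """
    candidates = [t for t in overlay_by_ts if t <= ts_ms]
    return overlay_by_ts[max(candidates)] if candidates else None
```

When the child is stationary, consecutive frames simply reuse the last result; when the child moves, a newer timestamped result replaces it.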
The technical scheme adopted by the application has the following beneficial effects. The terminal sends a building information display request to the server and then receives the target building data and the current position information of the wearable device, where the target building data comprises building data only of buildings within a preset distance of the wearable device's current position; because the terminal need not consider all building data during processing, both the speed and the accuracy of retrieval matching are improved. The terminal then extracts the image features of the monitored video frame image as feature data to be identified, retrieves and matches them against the target feature data in the target building data to determine target building information, and finally adds that information to the video frame image for display. Parents can thus see the child's surroundings visually and in time, conveniently judge whether the child's current environment is safe, and enjoy an improved interaction experience.
On the basis of the above embodiment, fig. 2 shows a flowchart of another information display method according to an embodiment of the present application, which further details the information display process. Referring to fig. 2, the information display method includes:
S201, establishing video call link connection with the wearable device.
Specifically, before sending the building information display request to the server, the terminal establishes a video call link connection with the wearable device. This indicates that the parent has started a video call with the child; the parent can already see the child's surroundings but cannot yet learn detailed building information. In a specific example, the terminal may start the VR function when the video call link connection is established, to facilitate transmission of video frame images with the wearable device during the call. SDK (Software Development Kit) initialization may also be performed in preparation for sending the building information display request to the server.
S202, sending a building information display request to a server.
S203, receiving target building data and current position information of the wearable device, wherein the target building data comprise building data of buildings within a preset distance from the current position of the wearable device, and the current position information is sent by a server.
S204, extracting the image characteristics of the monitored video frame image as characteristic data to be identified.
S205, calculating the retrieval matching degree of the feature data to be identified and the target feature data in the target building data.
Specifically, the LSH algorithm may be used to calculate the retrieval matching degree between the feature data to be identified and the target feature data in the target building data; the matching-degree rule may be implemented by adjusting the parameters of the LSH algorithm. For example, if the first moment matches successfully, the retrieval matching degree is set to 30%; if the second moment also matches, to 70%; and if the third moment also matches, to 100%. Applying the LSH algorithm to retrieve and match data points in the low-dimensional space saves retrieval time and improves the retrieval matching degree.
S206, filtering the result of the search mismatching according to the search matching degree, and determining building information corresponding to the target feature data in the result that the search matching degree meets the preset matching degree rule as target building information.
For example, the RANSAC algorithm may be applied to filter out mismatched retrieval results according to the retrieval matching degree: RANSAC computes the parameters of a data model from a sample data set that includes abnormal data, yielding valid sample data, so mismatched data can be removed. The preset matching degree rule may select the highest matching degree, or the highest and next-highest matching degrees. For example, the building information corresponding to the target feature data with the highest retrieval matching degree, such as the Baidu Building, is determined as the target building information. Because the mobile phone stores the target building data from the server, which includes the correspondence between building information and target feature data, the building information can be quickly determined from the target feature data.
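The consensus idea behind the RANSAC filtering step can be sketched in miniature. This is a deliberately simplified illustration, not the patent's implementation: here the "model" is just a single scalar (for example, a matching score) estimated from a minimal sample, whereas the real algorithm would operate on feature correspondences.

```python
import random

def ransac_filter(samples, threshold=1.0, iterations=100, seed=0):
    """Minimal RANSAC-style sketch: from samples containing outliers,
    hypothesise a model from a minimal sample, count its consensus set,
    and return the largest inlier subset found."""
    rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
    best_inliers = []
    for _ in range(iterations):
        candidate = rng.choice(samples)  # model from a minimal (size-1) sample
        inliers = [s for s in samples if abs(s - candidate) <= threshold]
        if len(inliers) > len(best_inliers):  # keep the largest consensus
            best_inliers = inliers
    return best_inliers
```

Applied to retrieval results, the outliers discarded this way correspond to the mismatched results the text describes removing before the matching-degree rule is applied.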
S207, identifying the position of the target building and the name of the target building in the target building information.
Specifically, the position and name of the target building are identified from the target building information. The position of the target building may be expressed in latitude and longitude; this position information can be collected during the server's data acquisition stage, when a one-to-one correspondence between a building's picture feature data and its position is established. The building's name is likewise collected at the data acquisition stage and only needs to be identified through its correspondence with the target feature data.
S208, calculating a position difference according to the current position of the wearable device and the position of the target building.
The current position of the wearable device may also be a latitude-longitude position, so a position difference can be calculated from the wearable device's current position and the target building's position. This difference clearly indicates the child's current position, achieving accurate positioning of the child with the building as a reference; for example, the child is 165 meters from the Digital Science and Technology Square. In practice, two target buildings may be used, further helping the parent determine the child's position and judge whether the surroundings are safe.
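Computing the position difference from two latitude-longitude points is typically done with the haversine great-circle formula; the patent does not specify a formula, so this standard one is an assumption.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees).

    A standard way to obtain the "position difference" between the
    wearable device's position and the target building's position.
    """
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)  # latitude difference
    dl = math.radians(lon2 - lon1)  # longitude difference
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```

The result, rounded to metres, is what would be rendered into the overlay text, e.g. "165 meters from the Digital Science and Technology Square".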
And S209, adding the position difference and the name of the target building to the video frame image for display.
Specifically, at least one position difference and at least one building name are added to the video frame image for display, so the parent can see the buildings around the child intuitively and visually. In a specific example, fig. 3 shows a terminal display interface in which the child is 7 meters from the Baidu Building and 165 meters from the Digital Science and Technology Square; the parent can thus obtain the location information of the child's surroundings in time, judge whether the child is safe, and arrive promptly if there is a potential safety hazard.
In this embodiment, the terminal first establishes the video call link connection with the wearable device, so it can promptly offer the parent the option of displaying surrounding building conditions. During the video call, the image features of the monitored video frame image are extracted as feature data to be identified, the retrieval matching degree against the target feature data in the target building data is calculated, and mismatched results are filtered out by matching degree, so that the building information corresponding to at least one group of target feature data can be determined as target building information and the parent can weigh different target buildings in their analysis. Finally, the target building's name and its position difference from the wearable device's current position are added to the video frame image for display. Presenting the target building's name and position difference visually within the video call lets parents learn the child's surroundings in time.
In addition, building information around the wearable device is displayed in real time in the mobile phone video call interface; for example, the building information can indicate that a nearby building is an office building, a shop or a hospital, and approximately how far that office building, shop or hospital is from the child. Furthermore, AR (Augmented Reality) technology can be applied to identify the buildings around the wearable device, so that the identification result is more accurate and the subsequent retrieval matching process is more accurate. Thus, parents can intuitively know the current position of the child and the buildings around the child, with the building information displayed in text form. The information content of the video call is further enriched, and the user's sense of security and interactive experience are improved.
Fig. 4 is a flowchart of an information display method according to an embodiment of the present application, where the information display method according to this embodiment may be performed by an information display device, which may generally be integrated in a server and may be implemented by hardware and/or software. Referring to fig. 4, the method may specifically include:
S401, receiving a building information display request from a terminal.
Specifically, when a parent needs to see the environment around the child, the parent can send a building information display request to the server through the terminal, and the server responds after receiving the request. For the manner in which the terminal sends the building information display request, reference may be made to the foregoing explanation in the embodiments of the present application, and details are not repeated here.
S402, acquiring the current position of the wearable device.
The server may acquire the current position of the wearable device after receiving the information display request of the terminal. The current position of the wearable device may be acquired either actively or received passively; in the passive reception manner, the wearable device periodically sends its current position to the server.
And S403, searching in the pre-stored picture characteristic data to determine building data of a building within a preset distance from the current position of the wearable device as target building data.
Specifically, the server stores the collected picture feature data of the building data in advance and then searches the pre-stored picture feature data. Taking the position difference as a reference, the searching process can filter out the picture feature data of buildings that are far away from the wearable device; for example, with a preset distance of 500 meters, the remaining buildings are all within 500 meters, and the building data corresponding to the remaining picture feature data is taken as target building data. In addition to the picture feature data, the building data may include a building name, a building location, and the like. Therefore, not all building data needs to be sent to the terminal, which reduces the data processing load on the terminal and improves its searching and matching speed.
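The server-side screening in S403 can be sketched as a distance filter over the stored records. The record layout below is an assumption (the patent only says building data includes a name, a position and picture feature data), and the distance uses an equirectangular approximation, which is adequate over spans of a few hundred meters:

```python
import math

PRESET_DISTANCE_M = 500.0  # example threshold named in the description

def approx_distance_m(p1, p2):
    """Equirectangular approximation of the distance in meters between (lat, lon) pairs."""
    lat1, lon1 = p1
    lat2, lon2 = p2
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371000.0 * math.hypot(x, y)

def select_target_building_data(buildings, wearable_pos, max_dist=PRESET_DISTANCE_M):
    """Keep only building records within max_dist of the wearable device.

    Each record is assumed to look like {"name": ..., "position": (lat, lon), ...};
    only the filtered subset would be sent on to the terminal.
    """
    return [b for b in buildings
            if approx_distance_m(b["position"], wearable_pos) <= max_dist]
```

This is the "primary screening" step: the terminal then only retrieves and matches against the reduced target building data.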
And S404, sending the target building data and the current position information of the wearable equipment to the terminal so as to instruct the terminal to determine the target building information and add the target building information to the corresponding video frame image for display.
Specifically, the screened target building data and the current position information of the wearable device are sent to the terminal, and then the terminal can determine the target building information according to the feature data to be identified and the target feature data in the target building data, which are obtained by extracting the video frame images. The specific manner of determining the target building information may refer to the above description in the embodiments of the present application, and will not be repeated here.
In the embodiment of the application, the server receives the building information display request from the terminal, then acquires the current position of the wearable equipment, and searches in the pre-stored picture feature data to determine the building data of the building within the preset distance from the current position of the wearable equipment as the target building data, so that the building data can be primarily screened, the building data of the building far away from the wearable equipment is removed, and the accuracy rate in determining the surrounding building is improved; in addition, the target building data and the current position information of the wearable device are sent to the terminal so as to instruct the terminal to determine the target building information and add the target building information to the corresponding video frame image for display. The environment conditions around the wearable equipment are visually displayed on the terminal in the video call process in a visual mode, so that parents can refer to the environment conditions.
On the basis of the above embodiment, fig. 5 shows a flowchart of another information display method according to an embodiment of the present application, which further refines the information display method. Referring to fig. 5, the information display method includes:
S501, pre-storing the preset quantity of building picture data in the preset area.
Specifically, in the data acquisition stage, field personnel can carry an acquisition device to the preset area to acquire the preset quantity of building picture data, where the building picture data can include building pictures, building names and building positions, and the building positions can be longitude and latitude coordinates and the like. In a specific example, the acquisition device uploads the corresponding building picture data to the server after completing data acquisition, and the server stores the preset quantity of building picture data in the preset area in advance.
S502, extracting features of the building picture data to be stored as picture feature data.
Optionally, the color moment data of the building picture data is extracted and stored as picture feature data. By way of example, a color moment method is applied to extract the feature data in the building pictures and store it as picture feature data, and during storage, the position information of the building corresponding to each group of picture feature data can be associated with it. For example, table 1 shows a picture feature data table. As can be seen from table 1, the position of each building is represented by a set of coordinates, for example (X1, Y1), (X2, Y2) … (Xn, Yn), where n is an integer of 2 or more and denotes the nth building. R1 … Rm, G1 … Gm and B1 … Bm in table 1 represent the color moment data of the building at the (X1, Y1) position; R2 … Rp, G2 … Gp and B2 … Bp represent the color moment data of the building at the (X2, Y2) position; Rn … Rq, Gn … Gq and Bn … Bq represent the color moment data of the building at the (Xn, Yn) position. Here R is short for Red, G is short for Green and B is short for Blue; in addition, m, p and q are integers.
Table 1 picture characteristic data table
(X1,Y1) R1…Rm;G1…Gm;B1…Bm
(X2,Y2) R2…Rp;G2…Gp;B2…Bp
…… ……
(Xn,Yn) Rn…Rq;Gn…Gq;Bn…Bq
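The first three color moments per channel (mean, standard deviation, skewness) are a common realization of the color moment features in table 1; the patent does not spell out which moments are used, so that choice, and the representation of a picture as a flat list of (R, G, B) pixel tuples, are assumptions of this sketch:

```python
import math

def channel_moments(values):
    """First three moments of one color channel: mean, std, skewness (cube-root form)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    third = sum((v - mean) ** 3 for v in values) / n
    skew = math.copysign(abs(third) ** (1.0 / 3.0), third)
    return [mean, std, skew]

def color_moments(pixels):
    """9-element feature vector [R moments, G moments, B moments] for one picture."""
    feats = []
    for channel in range(3):  # 0 = R, 1 = G, 2 = B, matching table 1's R/G/B groups
        feats.extend(channel_moments([p[channel] for p in pixels]))
    return feats
```

Each stored row of table 1 would then pair one such vector with the building's (X, Y) coordinates.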
S503, receiving a building information display request from the terminal.
S504, receiving the current position sent by the wearable device; or, sending a location information request to the wearable device to instruct the wearable device to send the current location to the server.
Specifically, when the current position of the wearable device is acquired, the current position sent by the wearable device in real time can be received, or the child phone watch can be notified to report its current position in an IM (Instant Messaging) manner, that is, a position information request is sent to the wearable device, and the child phone watch sends its current position to the server after receiving the position information request.
And S505, searching in the pre-stored picture characteristic data to determine building data of a building within a preset distance from the current position of the wearable device as target building data.
S506, the target building data and the current position information of the wearable device are sent to the terminal, so that the terminal is instructed to determine the target building information, and the target building information is added to the corresponding video frame image to be displayed.
In the embodiment of the application, the acquisition equipment uploads the acquired building picture data to the server, the server stores the building picture data with preset quantity in the preset area in advance, and then extracts the color moment data of the building picture data as the picture feature data for storage, so that the picture feature data can be used for subsequent identification, retrieval and matching, and the matching accuracy is improved; in addition, after receiving the building information display request from the terminal, the server can acquire the current position of the wearable equipment in an active triggering or passive receiving mode, so that the accuracy of searching and matching can be improved by timely acquiring the position of the wearable equipment; and finally, the server sends the target building data and the current position information of the wearable equipment to the terminal so as to instruct the terminal to determine the target building information and add the target building information to the corresponding video frame image for display. The method and the device realize that the building conditions around the wearable equipment are displayed on the video call interface of the terminal in a visual mode in the video call process for parents to reference, so that the safety condition of the children can be known in time.
Fig. 6 shows a signaling flow chart of an information display method, referring to fig. 6, in which 61 denotes a terminal and 62 denotes a server.
S601, the terminal sends a building information display request to the server.
S602, the server acquires the current position of the wearable device.
And S603, searching in the pre-stored picture characteristic data by the server to determine building data of a building within a preset distance from the current position of the wearable device as target building data.
And S604, the server sends the target building data and the current position information of the wearable device to the terminal.
S605, the terminal extracts the image characteristics of the monitored video frame image as the characteristic data to be identified.
And S606, the terminal searches and matches the feature data to be identified with the target feature data in the target building data to determine the target building information.
S607, the terminal adds the target building information to the video frame image for display.
The detailed implementation manner of each step in the embodiments of the present application may refer to the descriptions in the other embodiments, which are not repeated here.
In the embodiment of the application, when the wearable device and the mobile phone carry out video call, a parent can know whether the environment of the wearable device is safe or not through the mobile phone, and particularly can display the environment conditions around the wearable device on the mobile phone in a visual mode for the parent to refer to while the mobile phone displays the street information of the current position of the wearable device.
On the basis of the above embodiments, fig. 7 is a schematic structural diagram of an information display device according to an embodiment of the present application, where the information display device is integrated in a terminal. Referring to fig. 7, the information display apparatus provided in this embodiment specifically includes: a request transmitting module 701, a data receiving module 702, a feature extracting module 703, a target building information determining module 704, and a display module 705.
The request sending module 701 is configured to send a building information display request to a server; the data receiving module 702 is configured to receive target building data and current location information of the wearable device, where the target building data includes building data of a building within a preset distance from a current location of the wearable device; a feature extraction module 703, configured to extract the image feature of the monitored video frame image as feature data to be identified; the target building information determining module 704 is configured to search and match the feature data to be identified with target feature data in the target building data, so as to determine target building information; and the display module 705 is used for adding the target building information to the video frame image for display.
The technical scheme adopted by the application has the following beneficial effects: the method comprises the steps that a building information display request is sent to a server through a terminal, then target building data sent by the server and current position information of the wearable equipment are received, the target building data comprise building data of buildings within a preset distance from the current position of the wearable equipment, and therefore the terminal does not need to consider all building data in the processing process, and the searching matching speed and searching matching accuracy are improved; in addition, the terminal extracts the monitored image features of the video frame image as feature data to be identified, retrieves and matches the feature data to be identified with target feature data in target building data to determine target building information, and finally adds the target building information to the video frame image for display, so that the parents can know the surrounding environment of the child in time in a visual mode, the parents can conveniently judge whether the current environment of the child is safe or not, and interaction experience is improved.
Further, the server stores the picture feature data of the buildings with the preset quantity in the preset area in advance.
Further, the target building information determining module 704 is specifically configured to:
calculating the retrieval matching degree of the feature data to be identified and the target feature data in the target building data;
and filtering a search mismatching result according to the search matching degree, and determining building information corresponding to target feature data in the result that the search matching degree meets a preset matching degree rule as target building information.
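The retrieval matching degree can be realized as a similarity derived from the distance between the feature vector to be identified and each target feature vector. The specific measure below (inverse Euclidean distance) and the 0.8 threshold are assumptions for illustration; the patent only requires that mismatches be filtered by a preset matching degree rule:

```python
import math

def matching_degree(query, candidate):
    """Similarity in (0, 1]: 1 for identical vectors, decreasing with Euclidean distance."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(query, candidate)))
    return 1.0 / (1.0 + dist)

def filter_matches(query, target_building_data, threshold=0.8):
    """Drop mismatching results; keep buildings whose degree meets the preset rule."""
    results = []
    for building in target_building_data:  # assumed shape: {"name": ..., "features": [...]}
        degree = matching_degree(query, building["features"])
        if degree >= threshold:
            results.append((building["name"], degree))
    # best matches first, so at least one group of target feature data is determined
    return sorted(results, key=lambda r: r[1], reverse=True)
```

The surviving entries correspond to the "target building information" that the display module then overlays on the video frame.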
Further, the display module 705 is specifically configured to:
identifying a location of a target building and a name of the target building in the target building information;
calculating a position difference according to the current position of the wearable device and the position of the target building;
and adding the position difference and the name of the target building to the video frame image for display.
Further, the apparatus also comprises a link connection establishment module, which is used for establishing the video call link connection with the wearable device before the building information display request is sent to the server.
Further, the feature data to be identified and the target feature data are both color moment data.
The information display device provided by the embodiment of the application can be used for executing the information display method provided by the embodiment, and has corresponding functions and beneficial effects.
On the basis of the above embodiments, fig. 8 is a schematic structural diagram of an information display device according to an embodiment of the present application, where the information display device is integrated in a server. Referring to fig. 8, the information display apparatus provided in this embodiment specifically includes: a request receiving module 801, a location acquiring module 802, a target building data determining module 803, and a data transmitting module 804.
Wherein, the request receiving module 801 is configured to receive a building information display request from a terminal; a location obtaining module 802, configured to obtain a current location of the wearable device; a target building data determining module 803, configured to search in the pre-stored picture feature data to determine building data of a building within a preset distance from a current position of the wearable device as target building data; the data sending module 804 is configured to send the target building data and the current location information of the wearable device to the terminal, so as to instruct the terminal to determine the target building information and add the target building information to the corresponding video frame image for display.
In the embodiment of the application, the server receives the building information display request from the terminal, then acquires the current position of the wearable equipment, and searches in the pre-stored picture feature data to determine the building data of the building within the preset distance from the current position of the wearable equipment as the target building data, so that the building data can be primarily screened, the building data of the building far away from the wearable equipment is removed, and the accuracy rate in determining the surrounding building is improved; in addition, the target building data and the current position information of the wearable device are sent to the terminal so as to instruct the terminal to determine the target building information and add the target building information to the corresponding video frame image for display. The environment conditions around the wearable equipment are visually displayed on the terminal in the video call process for parents to reference in a visual mode.
Further, the apparatus further comprises a data storage module, configured to, before the building information display request is received from the terminal:
pre-storing the building picture data of the preset quantity in a preset area;
and extracting the characteristics of the building picture data as picture characteristic data for storage.
Further, the data storage module is specifically configured to:
and extracting color moment data of the building picture data as picture characteristic data to store.
Further, the location obtaining module 802 is specifically configured to:
receiving a current position sent by the wearable device; or
sending a location information request to the wearable device to instruct the wearable device to send the current location to the server.
The information display device provided by the embodiment of the application can be used for executing the information display method provided by the embodiment, and has corresponding functions and beneficial effects.
The embodiment of the application provides a terminal, and the information display device provided by the embodiment of the application can be integrated in the terminal. Fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application. Referring to fig. 9, the terminal includes: a processor 90 and a memory 91. The number of processors 90 in the terminal may be one or more, and one processor 90 is taken as an example in fig. 9. The number of memories 91 in the terminal may be one or more, and one memory 91 is taken as an example in fig. 9. The processor 90 and the memory 91 of the terminal may be connected by a bus or otherwise; in fig. 9, the bus connection is taken as an example.
The memory 91 is a computer-readable storage medium, and may be used to store a software program, a computer-executable program, and program instructions/modules corresponding to the information display method according to any embodiment of the present application (for example, a request transmitting module 701, a data receiving module 702, a feature extracting module 703, a target building information determining module 704, and a display module 705 in the information display device). The memory 91 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, the memory 91 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 91 may further include memory located remotely from processor 90, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 90 executes various functional applications of the terminal and data processing by executing software programs, instructions and modules stored in the memory 91, i.e., implements the above-described information display method including: sending a building information display request to a server; receiving target building data and current position information of the wearable device, wherein the target building data comprise building data of buildings within a preset distance from the current position of the wearable device; extracting the image characteristics of the monitored video frame image as characteristic data to be identified; searching and matching the feature data to be identified with target feature data in target building data to determine target building information; and adding the target building information to the video frame image for display.
The terminal provided by the embodiment can be used for executing the information display method provided by the embodiment, and has corresponding functions and beneficial effects.
The embodiment of the application provides a server, and the information display device provided by the embodiment of the application can be integrated in the server. Fig. 10 is a schematic structural diagram of a server according to an embodiment of the present application. Referring to fig. 10, the server includes: processor 100, memory 110. The number of processors 100 in the server may be one or more, one processor 100 being taken as an example in fig. 10. The number of memories 110 in the server may be one or more, one memory 110 being taken as an example in fig. 10. The processor 100 and the memory 110 of the server may be connected by a bus or otherwise, for example in fig. 10.
The memory 110 is used as a computer readable storage medium for storing a software program, a computer executable program, and modules, and is configured to store program instructions/modules corresponding to the information display method according to any embodiment of the present application (for example, a request receiving module 801, a position acquiring module 802, a target building data determining module 803, and a data transmitting module 804 in the information display device). The memory 110 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the server, etc. In addition, memory 110 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 110 may further include memory located remotely from processor 100, which may be connected to a server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 100 executes various functional applications of the server and data processing by executing software programs, instructions and modules stored in the memory 110, i.e., implements the above-described information display method including: receiving a building information display request from a terminal; acquiring the current position of the wearable device; searching in the pre-stored picture characteristic data to determine building data of a building within a preset distance from the current position of the wearable equipment as target building data; and sending the target building data and the current position information of the wearable equipment to the terminal so as to instruct the terminal to determine the target building information and add the target building information to the corresponding video frame image for display.
The server provided by the embodiment can be used for executing the information display method provided by the embodiment, and has corresponding functions and beneficial effects.
The embodiment of the present application also provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are for performing an information display method applied to a terminal, comprising: sending a building information display request to a server; receiving target building data and current position information of the wearable device, wherein the target building data comprise building data of buildings within a preset distance from the current position of the wearable device; extracting the image characteristics of the monitored video frame image as characteristic data to be identified; searching and matching the feature data to be identified with target feature data in target building data to determine target building information; and adding the target building information to the video frame image for display.
The computer executable instructions, when executed by a computer processor, are also for performing an information display method, the information display method being applied to a server, comprising: receiving a building information display request from a terminal; acquiring the current position of the wearable device; searching in the pre-stored picture characteristic data to determine building data of a building within a preset distance from the current position of the wearable equipment as target building data; and sending the target building data and the current position information of the wearable equipment to the terminal so as to instruct the terminal to determine the target building information and add the target building information to the corresponding video frame image for display.
Storage media: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: mounting media such as CD-ROM, floppy disk or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; nonvolatile memory such as flash memory, magnetic media (e.g., hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a second, different computer system connected to the first computer system through a network such as the internet. The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present application is not limited to the information display method described above, and may also perform the relevant operations in the information display method provided in any embodiment of the present application.
The information display apparatus, the storage medium, and the device provided in the above embodiments may perform the information display method provided in any embodiment of the present application, and technical details not described in detail in the above embodiments may be referred to the information display method provided in any embodiment of the present application.
Note that the above is only a preferred embodiment of the present application and the technical principle applied. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, while the application has been described in connection with the above embodiments, the application is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the application, which is set forth in the following claims.

Claims (13)

1. An information display method applied to a terminal, comprising:
establishing video call link connection between a terminal and wearable equipment;
sending a building information display request to a server;
receiving target building data and current position information of a wearable device, wherein the target building data comprise building data of a building within a preset distance from the current position of the wearable device, and the target building data are sent by a server;
extracting the image characteristics of the video frame images monitored in the video call process as characteristic data to be identified;
searching and matching the characteristic data to be identified with target characteristic data in the target building data to determine target building information;
adding the target building information to the video frame image for display, wherein the method comprises the following steps of: and identifying the position of the target building and the name of the target building in the target building information, calculating a position difference according to the current position of the wearable equipment and the position of the target building, and adding the position difference and the name of the target building to the video frame image for display.
2. The method according to claim 1, wherein the server stores in advance picture feature data of a preset number of buildings in a preset area.
3. The method of claim 1, wherein the retrieving matching the feature data to be identified with target feature data in the target building data to determine target building information comprises:
calculating the retrieval matching degree of the feature data to be identified and the target feature data in the target building data;
and filtering a search mismatching result according to the search matching degree, and determining building information corresponding to target feature data in the result that the search matching degree meets a preset matching degree rule as target building information.
4. A method according to any one of claims 1-3, wherein the characteristic data to be identified and the target characteristic data are both color moment data.
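Claim 4 specifies color moments as the feature representation. A standard formulation (not stated in the patent, but the common one in the image-retrieval literature) uses the first three moments per color channel: mean, standard deviation, and skewness, giving a 9-element vector for an RGB image. A minimal pure-Python sketch:

```python
import math

def color_moments(pixels):
    """First three color moments (mean, std, skewness) per RGB channel.

    `pixels` is a list of (R, G, B) tuples; returns a 9-element feature vector.
    """
    feats = []
    n = len(pixels)
    for ch in range(3):
        vals = [p[ch] for p in pixels]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        std = var ** 0.5
        skew_cubed = sum((v - mean) ** 3 for v in vals) / n
        # signed cube root preserves the sign of the skewness
        skew = math.copysign(abs(skew_cubed) ** (1 / 3), skew_cubed)
        feats.extend([mean, std, skew])
    return feats
```

Such a vector is compact (9 floats per picture), which is why it suits the server-side pre-stored picture feature data described in claims 6-7.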
5. An information display method, comprising:
after a video call link connection is established between the terminal and the wearable device, the server receives a building information display request from the terminal;
acquiring the current position of the wearable device;
searching in the pre-stored picture characteristic data to determine building data of a building within a preset distance from the current position of the wearable equipment as target building data;
sending the target building data and the current position information of the wearable device to the terminal so as to instruct the terminal to determine target building information and add it to a corresponding video frame image on the terminal for display during the video call, which comprises: sending the position of the target building and the name of the target building in the target building information to the terminal so as to instruct the terminal to calculate a position difference according to the current position of the wearable device and the position of the target building, and to add the position difference and the name of the target building to the video frame image for display.
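The server-side selection step in claim 5 (building data within a preset distance of the wearable device) can be sketched as a geodesic distance filter. The data layout (`{name: {"pos": (lat, lon), ...}}`) and the 500 m default are assumptions for illustration; the claim only requires some preset distance.

```python
import math

def within_preset_distance(buildings, wearable_pos, preset_m=500.0):
    """Select buildings whose stored position lies within `preset_m` meters of the wearable."""
    def dist_m(a, b):
        R = 6371000.0  # mean Earth radius in meters
        p1, p2 = math.radians(a[0]), math.radians(b[0])
        dp = math.radians(b[0] - a[0])
        dl = math.radians(b[1] - a[1])
        h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R * math.asin(math.sqrt(h))
    return {name: rec for name, rec in buildings.items()
            if dist_m(wearable_pos, rec["pos"]) <= preset_m}
```

Pre-filtering by location keeps the candidate set small, so the terminal only has to match video-frame features against nearby buildings rather than the whole database.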
6. The method of claim 5, wherein prior to receiving the building information display request from the terminal, further comprising:
pre-storing the building picture data of the preset quantity in a preset area;
and extracting the characteristics of the building picture data as picture characteristic data to store.
7. The method of claim 6, wherein the extracting features of the building picture data for storage as picture feature data comprises:
and extracting the color moment data of the building picture data as picture characteristic data to store.
8. The method of claim 5, wherein the obtaining the current location of the wearable device comprises:
receiving the current position sent by the wearable device; or
sending a position information request to the wearable device to instruct the wearable device to send its current position to the server.
9. An information display device, comprising:
the link connection establishment module is used for establishing video call link connection with the wearable equipment;
the request sending module is used for sending a building information display request to the server;
the data receiving module is used for receiving target building data and current position information of the wearable device sent by the server, wherein the target building data comprises building data of buildings within a preset distance of the current position of the wearable device;
the feature extraction module is used for extracting the image features of the video frame images monitored in the video call process as feature data to be identified;
the target building information determining module is used for carrying out retrieval matching on the characteristic data to be identified and target characteristic data in the target building data so as to determine target building information;
The display module is used for adding the target building information to the video frame image for display, and is specifically used for: and identifying the position of the target building and the name of the target building in the target building information, calculating a position difference according to the current position of the wearable equipment and the position of the target building, and adding the position difference and the name of the target building to the video frame image for display.
10. An information display device, comprising:
the request receiving module is used for receiving a building information display request from the terminal;
the position acquisition module is used for acquiring the current position of the wearable equipment;
the target building data determining module is used for searching in the pre-stored picture characteristic data to determine building data of a building within a preset distance from the current position of the wearable equipment as target building data;
the data sending module is used for sending the target building data and the current position information of the wearable device to the terminal so as to instruct the terminal to determine target building information and add it to a corresponding video frame image for display, and is specifically used for: sending the position of the target building and the name of the target building in the target building information to the terminal so as to instruct the terminal to calculate a position difference according to the current position of the wearable device and the position of the target building, and to add the position difference and the name of the target building to the video frame image for display, wherein the terminal establishes a video call link connection with the wearable device, and the video frame image is a display image in the video call process.
11. A terminal, comprising:
a memory and one or more processors;
the memory is used for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the information display method of any one of claims 1-4.
12. A server, comprising:
a memory and one or more processors;
the memory is used for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the information display method of any one of claims 5-8.
13. A computer-readable storage medium containing computer-executable instructions, which, when executed by a computer processor, are for performing the information display method of any of claims 1-8.
CN201911108492.5A 2019-11-13 2019-11-13 Information display method, corresponding device, terminal, server and storage medium Active CN111046234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911108492.5A CN111046234B (en) 2019-11-13 2019-11-13 Information display method, corresponding device, terminal, server and storage medium

Publications (2)

Publication Number Publication Date
CN111046234A CN111046234A (en) 2020-04-21
CN111046234B true CN111046234B (en) 2023-09-19

Family

ID=70232744

Country Status (1)

Country Link
CN (1) CN111046234B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102980570A (en) * 2011-09-06 2013-03-20 上海博路信息技术有限公司 Live-scene augmented reality navigation system
CN103888900A (en) * 2014-02-21 2014-06-25 毛蔚青 Automatic identification method based on building or geographic area of mobile terminal
CN105160327A (en) * 2015-09-16 2015-12-16 小米科技有限责任公司 Building identification method and device
CN107608202A (en) * 2017-09-12 2018-01-19 合肥矽智科技有限公司 A kind of children open air location watch microrobot
CN108507541A (en) * 2018-03-01 2018-09-07 广东欧珀移动通信有限公司 Building recognition method and system and mobile terminal
CN110022528A (en) * 2019-05-24 2019-07-16 广东小天才科技有限公司 A kind of position information display method and device based on video calling
CN110177240A (en) * 2019-03-27 2019-08-27 广东小天才科技有限公司 A kind of video call method and wearable device of wearable device

Similar Documents

Publication Publication Date Title
US10937249B2 (en) Systems and methods for anchoring virtual objects to physical locations
TWI546519B (en) Method and device for displaying points of interest
CN108897468B (en) Method and system for entering into virtual three-dimensional space panorama of house source
Panteras et al. Triangulating social multimedia content for event localization using Flickr and Twitter
CN110298269B (en) Scene image positioning method, device and equipment and readable storage medium
CN110645986A (en) Positioning method and device, terminal and storage medium
EP3190581B1 (en) Interior map establishment device and method using cloud point
CN103685960A (en) Method and system for processing image with matched position information
EP2672401A1 (en) Method and apparatus for storing image data
WO2012126381A1 (en) Device and method for obtaining shared object related to real scene
KR20140102181A (en) Information processing device, information processing method and program
CN103107938A (en) Information interactive method, server and terminal
US9288636B2 (en) Feature selection for image based location determination
CN104484814A (en) Advertising method and system based on video map
CN108846899B (en) Method and system for improving area perception of user for each function in house source
WO2020259360A1 (en) Locating method and device, terminal, and storage medium
US8655883B1 (en) Automatic detection of similar business updates by using similarity to past rejected updates
RU2622843C2 (en) Management method of image processing device
EP4184348A1 (en) Information retrieval method and apparatus and electronic device
WO2018103544A1 (en) Method and device for presenting service object data in image
CN111046234B (en) Information display method, corresponding device, terminal, server and storage medium
CN107221030B (en) Augmented reality providing method, augmented reality providing server, and recording medium
WO2015069560A1 (en) Image based location determination
EP3300020A1 (en) Image based location determination
US20150379040A1 (en) Generating automated tours of geographic-location related features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant