CN112527117A - Map display method, computer equipment and readable storage medium - Google Patents


Info

Publication number
CN112527117A
CN112527117A (application CN202011484131.3A)
Authority
CN
China
Prior art keywords: user, target content, map display, face, real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011484131.3A
Other languages
Chinese (zh)
Inventor
李萌迪
谭述安
李承泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tiya Digital Technology Co ltd
Original Assignee
Shenzhen Tiya Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tiya Digital Technology Co ltd filed Critical Shenzhen Tiya Digital Technology Co ltd
Priority to CN202011484131.3A priority Critical patent/CN112527117A/en
Publication of CN112527117A publication Critical patent/CN112527117A/en
Withdrawn legal-status Critical Current

Classifications

All under G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING:

    • G06F3/012 Head tracking input arrangements (under G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality; G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer; G06F3/00 Input/output arrangements, e.g. interface arrangements)
    • G06F3/013 Eye tracking input arrangements (under G06F3/011; G06F3/01; G06F3/00)
    • G06F16/26 Visual data mining; Browsing structured data (under G06F16/20 Information retrieval of structured data, e.g. relational data; G06F16/00 Information retrieval; Database structures therefor; File system structures therefor)
    • G06F16/29 Geographical information databases (under G06F16/20; G06F16/00)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention is applicable to the field of computers and provides a map display method applied to a large screen, computer equipment, and a readable storage medium. The method comprises the following steps: acquiring real-time facial dynamic information of a user; determining, according to the user's real-time facial dynamic information, whether to output target content; and, if target content is to be output, outputting the target content on a map display interface and adjusting its display mode according to the user's real-time facial dynamic information. In this scheme, the user's real-time facial dynamic information triggers the large screen to display the target content and regulates its display state. For a large screen that cannot be touched directly, or not across its full extent, this greatly facilitates user operation: the user needs no auxiliary terminal such as a PC or mobile device, and the control method is more intuitive and reliable.

Description

Map display method, computer equipment and readable storage medium
Technical Field
The invention belongs to the field of computers, and particularly relates to a map display method, computer equipment and a readable storage medium.
Background
An electronic map (digital map) is a map that is stored and consulted in digital form using computer technology. Specifically, an electronic map is a system for map making and application: a map generated under the control of a computer, a screen map based on digital cartography, and a visualized real map.
In fields such as urban public management and geographic monitoring, displaying a map on a large screen for analysis by managers and specialists is common practice.
At present, however, large screens are generally very big (some are as large as the wall of an exhibition hall), so control of the map on a large screen is basically performed on a separate terminal; a worker cannot interact with the large screen directly, which makes operation very inconvenient.
Disclosure of Invention
The embodiment of the invention provides a map display method applied to a large screen, aiming to solve the problem that control of a map on a large screen is basically performed on a separate terminal, so that a worker cannot interact with the large screen directly and operation is very inconvenient.
The embodiment of the invention is realized in such a way that a map display method applied to a large screen comprises the following steps:
acquiring real-time face dynamic information of a user;
determining whether to output target content according to the real-time face dynamic information of the user;
and if the target content is determined to be output, outputting the target content on a map display interface, and adjusting the display mode of the target content according to the real-time face dynamic information of the user.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above map display method applied to a large screen.
An embodiment of the present invention further provides a computer device, including: a memory and a processor;
the memory has stored therein a computer program;
the computer program, when executed by the processor, causes the processor to perform the steps of the map display method applied to a large screen.
According to the map display method applied to a large screen, the user's real-time facial dynamic information triggers the large screen to display the target content and regulates its display state. For a large screen that cannot be touched directly, or not across its full extent, this undoubtedly greatly facilitates user operation: the user needs no auxiliary terminal (such as a PC or a mobile terminal), and the operation mode is more intuitive and reliable.
Drawings
Fig. 1 is an implementation environment diagram of a map display method applied to a large screen according to an embodiment of the present invention;
fig. 2 is a flowchart of a map display method applied to a large screen according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, a first xx script may be referred to as a second xx script, and similarly, a second xx script may be referred to as a first xx script, without departing from the scope of the present application.
The embodiment of the invention provides a map display method applied to a large screen, which determines the target content to be output from the user's real-time facial dynamic information and adjusts the display mode of that content; this display mode greatly facilitates interaction between the user and the large-screen map.
Fig. 1 is an implementation environment diagram of a map display method applied to a large screen according to an embodiment of the present invention. The map display method of this embodiment is mainly applied to a large or medium display device/system. Such a device/system generally comprises a large screen 110 and a display control system 120 connected to it; the display control system generally comprises the related display driving apparatus, a core processor, and a network module for interfacing with external data. The system also comprises an image acquisition device/equipment (such as a camera) for acquiring the user's real-time facial dynamic information, and a processing device for tracking and analyzing that information; the processing device may be the core processor itself or a separately arranged module.
In one case in the present application, the so-called large screen 110 is a screen whose dimensions (length × width) exceed 2 m × 1.5 m. The scheme of this embodiment has particular application value in such large-screen scenarios, where it can greatly facilitate user operation.
In one case, the display control system 120 may itself be a computer device, which may be an independent physical server or terminal, a server cluster formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud hosting, cloud databases, cloud storage, and CDN.
In one case, a plurality of image capturing devices/apparatuses are provided; their number and coverage are set according to the required detection range and precision. They may be distributed around the large screen, or arranged in the user control area to monitor the user in real time. If necessary, sensors for monitoring the position of the user's face can also be arranged, such as an infrared detection device (capable of remote detection and, combined with image recognition software, of detecting the face orientation). In some cases a wearable assembly may also be provided, configured with a head-position tracking device to facilitate detection of the user's head/face position.
As an embodiment, fig. 2 shows a flowchart of a map display method applied to a large screen; for convenience of explanation, only the contents related to the embodiment of the present invention are shown. The method is described below as applied to the above large or medium display device/system (specifically, its internal display control system 120), in detail as follows:
step S202, acquiring real-time face dynamic information of the user.
In one embodiment, the process of acquiring real-time facial dynamic information of a user comprises the following steps:
collecting a user face video to generate a user face video stream;
and acquiring the real-time face dynamic information of the user from the face video stream of the user.
In this embodiment, the system invokes the related image acquisition control to direct the image acquisition device/equipment to capture the user's face video, and then, based on image and video tracking technology, extracts from that video the user's real-time facial dynamic information that satisfies the settings.
In one embodiment, the user's real-time facial dynamic information mainly describes: the relative position of the user's face and the map display interface, the user's eyeball movement, the user's face/eyeball dwell time, and the like.
In one embodiment, the relative position of the user's face and the map display interface comprises: the distance between the user's face and the map display interface, the orientation of the user's face relative to the map display interface, and the like.
In one embodiment, the user's face/eyeball dwell time refers to the duration of the user's posture of uninterrupted gaze at the map display interface. In general, the length of this dwell time indicates the user's degree of attention to the content of the gazed area, so it is a reliable basis on which to set trigger events for map content.
In one embodiment, the map display interface refers to a large screen of a large screen device, i.e., a display screen.
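As a rough illustration of how the dwell time defined above could be tracked, the sketch below accumulates per-region gaze time from per-frame detections. The region identifiers, the 30 fps frame period, and the reset-on-gaze-shift behavior are assumptions drawn from the "uninterrupted gaze" wording, not details given in the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class DwellTracker:
    """Accumulates face/eyeball dwell time per map region from per-frame
    gaze samples (a sketch; the patent does not specify frame rates or
    region identifiers)."""
    frame_period: float = 1.0 / 30.0                      # seconds per video frame
    dwell: Dict[str, float] = field(default_factory=dict) # region -> seconds
    last_region: Optional[str] = None

    def update(self, region: Optional[str]) -> float:
        """Feed the map region the user's gaze falls on in the current
        frame; returns the accumulated dwell time for that region.
        Gaze moving away resets the previous region's timer, matching
        the 'uninterrupted gaze' definition in the text."""
        if self.last_region is not None and region != self.last_region:
            self.dwell[self.last_region] = 0.0  # gaze interrupted: reset
        self.last_region = region
        if region is None:
            return 0.0
        self.dwell[region] = self.dwell.get(region, 0.0) + self.frame_period
        return self.dwell[region]
```

A caller would compare the returned dwell time against the threshold of step S204 below to decide whether to trigger target content.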
And step S204, determining whether to output the target content according to the real-time face dynamic information of the user.
In one embodiment, step S204 specifically comprises: determining the target content associated with the user's real-time facial dynamic information according to a preset information processing model. The preset information processing model contains association relations between preset facial dynamic information and contents to be output; when the user's real-time facial dynamic information is input into the preset information processing model, the corresponding content to be output is acquired based on these association relations and determined as the target content.
In one embodiment, the information processing model is a series of formula sets, packaged according to actual needs into a module with data input, processing, and output; the module may be a software module, a hardware module, or a combination of software and hardware.
In one embodiment, the preset information processing model is a database containing preset facial dynamic information, contents to be output, and the association relations between them. The preset facial dynamic information includes: the relative position of the user's face and the map display interface, the user's eyeball movement, and the user's face/eyeball dwell time. The relative position of the user's face and the map display interface comprises the distance between the user's face and the map display interface and the orientation of the user's face relative to it; the user's face/eyeball dwell time refers to the duration of the user's uninterrupted gaze at the map display interface.
In this embodiment, after the system collects and identifies the user's real-time facial dynamic information, it inputs that information into the preset information processing model, which searches the database using it as the key. If matching facial dynamic information is found, the associated content to be output is retrieved and taken as the target content.
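The database variant of the preset information processing model can be sketched as a simple keyed lookup from a recognized facial-dynamic event to the content to output. The event names, region identifiers, and content labels below are hypothetical placeholders, not terms from the patent.

```python
# Preset association relations: (facial dynamic event, gazed region) -> content.
# "any_region" acts as a wildcard fallback; all keys here are illustrative.
PRESET_ASSOCIATIONS = {
    ("gaze_dwell_exceeded", "district_map"): "map_magnified_view",
    ("gaze_dwell_exceeded", "scenic_spot"):  "travel_information",
    ("face_approaching",    "any_region"):   "map_magnified_view",
}

def determine_target_content(event, region):
    """Look up the user's real-time facial dynamic information (as a
    recognized event plus the gazed region) and return the associated
    content to output, or None when no association matches, in which
    case no target content is triggered."""
    return (PRESET_ASSOCIATIONS.get((event, region))
            or PRESET_ASSOCIATIONS.get((event, "any_region")))
```

A real implementation would presumably key on richer data (distances, orientations, dwell times) rather than discrete event labels, but the lookup-and-trigger structure is the same.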
In a preferred embodiment, to facilitate user operation, the associated operations may be implemented based on user eye movements; based on this, the preset information processing model further executes the following steps:
if the face/eyeball dwell time of the user for the target area on the map display interface exceeds a set threshold, determining any one or more of the map amplification content, the specific geographic position, the geographic landscape description, the human style information, the policy information, the travel information and the shopping navigation information of the target area or the content formed according to a set rule as the target content.
In this embodiment, a trigger condition for outputting content is set. The content formed according to the set rule actually refers to any information related to the target area the user is gazing at; it can be preset according to the application environment and actual requirements of the large screen and is triggered when the trigger condition of this embodiment is met. For example, the target content may be a search box (e.g., a Baidu search box or a Google search box), an interface/entry of specified software, a database, a specific file/photo/video, and the like.
In one case of this embodiment, the target content is the magnified map content of the target area: the part of the map the user is gazing at is enlarged and displayed, making it easier for the user to observe.
In one case of this embodiment, the specific geographic location is the specific address, latitude and longitude, coordinates, etc. of the area at which the user is gazing.
In one aspect of the present embodiment, the geographic profile refers to an introduction of geological features of the area at which the user is gazing, scientific investigation content, and the like.
In one case of this embodiment, the cultural information, policy information, travel information, shopping navigation information, and the like may be presented as text, or may be content in the interface of a program (a built-in module or an external program, such as Taobao or Weibo).
Such content can be triggered and output to the user once the conditions are met, which is convenient to use. In particular, in a large-screen scenario where the user cannot directly touch all areas of the screen, realizing the related control through recognition of the user's facial dynamics greatly facilitates the user's use and operation.
It can be understood that in the above embodiments the generation and output of the content is triggered by the user's face/eyeball dwell time for the target area on the map display interface; triggering can actually be performed in other ways, for example using the relative position of the user's face and the map display interface, or the user's eyeball movement, as the trigger condition.
In an embodiment, the specific geographic position, geographic landscape description, cultural information, policy information, travel information, shopping navigation information, and the like may be pre-stored in a local database, such as the database in the preset information processing model. In another case, this information can be obtained by searching online in real time: the system pre-stores some basic information and search conditions, and when the output condition of the target content is triggered, it forms the keywords to be searched from the basic information and search conditions, calls a search program, and outputs the search result as the target content. For example, if the basic information is geographic location information, the location the user is gazing at is Lianhuashan in Futian District, Shenzhen, and the preset search content is travel information, the system automatically searches, through a search function (such as a Baidu search box/plugin), for travel-related content about Lianhuashan in Futian District, Shenzhen, and outputs it as the target content.
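The keyword-formation step described above can be sketched as below. The patent only says that basic information and search conditions are combined into search keywords; the simple concatenation format and the example strings are assumptions.

```python
def build_search_query(base_info: str, search_condition: str) -> str:
    """Form the keyword string handed to the external search program when
    the target-content output condition is triggered.  base_info is the
    pre-stored basic information (e.g. the gazed geographic location) and
    search_condition the preset search content (e.g. travel information)."""
    return f"{base_info} {search_condition}"

# Hypothetical example mirroring the text: gazed location + preset condition.
query = build_search_query("Lianhuashan, Futian District, Shenzhen",
                           "travel information")
```

The resulting string would then be passed to whatever search function the system integrates, and its result displayed as the target content.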
And step S206, if the target content is determined to be output, outputting the target content on a map display interface, and adjusting the display mode of the target content according to the real-time face dynamic information of the user.
In one case of this embodiment, the target content may be directly output on the map display interface, or may be output on another interface, for example, another external device.
In a case of this embodiment, the outputting the target content on the map display interface includes: displaying the target content directly on the map display interface; or displaying the target content on the map display interface via a pop-up component.
In one case, the ejection assembly is: web pages, message boxes, or floating windows.
In one case of this embodiment, after the user triggers and generates the target content through the user real-time facial dynamic information, the display mode of the target content may be further controlled through the subsequent user real-time facial dynamic information.
Specifically, the adjusting the display mode of the target content according to the user's real-time facial dynamic information includes:
zooming the target content in or out according to the distance between the user's face and the map display interface and the change in that distance;
adjusting the display content of the focus area of the target content, or turning pages or scrolling the target content, according to the orientation of the user's face relative to the map display interface and its change;
and adjusting the display tone or display brightness of the displayed content according to the user's eyeball movement.
All of these adjustment modes adjust the display of the target content according to the real-time dynamics (expression or posture) of the user's face while the user is attending to the map display interface; their purpose is to facilitate the user's operation on the map content shown on the large screen.
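The first adjustment mode, zooming by face-to-screen distance, can be sketched as a mapping from measured distance to zoom factor. The inverse-proportional mapping and the 2 m reference distance are illustrative assumptions; the patent only states that zoom follows the distance and its change.

```python
def zoom_from_distance(distance_m: float, ref_distance_m: float = 2.0) -> float:
    """Map the face-to-screen distance to a zoom factor for the target
    content: moving closer than the reference distance zooms in, moving
    farther away zooms out.  Both the functional form and the reference
    distance are assumptions for illustration."""
    distance_m = max(distance_m, 0.1)   # guard against degenerate readings
    return ref_distance_m / distance_m
```

A renderer would recompute this factor per frame, so that a continuous change in distance produces a continuous zoom, matching the "distance and the change of the distance" wording.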
In one embodiment, a user's actual operation usually involves a large amount of data, and when these data are displayed on the screen the related contents often become mixed and cluttered. For this, the method of the present application further includes:
and when the real-time face dynamic information of the user triggers multiple groups of target contents, generating multiple groups of corresponding note buttons, wherein the multiple groups of note buttons are sequentially arranged in a specified area on the map display interface according to the triggering sequence of the multiple groups of target contents.
In this way, the user can select, through the notes, the target content placed on the topmost layer of the display interface, the topmost layer being the layer with the highest display priority on the map display interface, which cannot be blocked by other content. When the number of target contents grows further, the notes may be grouped, and each group of notes may be stored in menu form.
The user may subsequently select the target content displayed on the top layer by triggering its tag; the trigger may be a gesture, the user's real-time facial dynamic information, or voice, and is not specifically limited.
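The note-button arrangement above can be sketched as follows: buttons are ordered by the trigger sequence of their target contents and folded into menu groups when they pile up. The group size of 5 is an assumption; the patent only says that notes may be grouped and stored in menu form.

```python
from typing import Dict, List

def layout_note_buttons(triggered_contents: List[str],
                        group_size: int = 5) -> List[List[Dict]]:
    """Arrange note buttons in the order the target contents were
    triggered, then split them into menu groups of at most group_size.
    Returns a list of groups, each a list of button records."""
    buttons = [{"index": i, "content": c}
               for i, c in enumerate(triggered_contents, start=1)]
    return [buttons[i:i + group_size]
            for i in range(0, len(buttons), group_size)]
```

Selecting a button would then bring its associated target content to the topmost layer, as described in the text.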
In the embodiment of the application, the user's real-time facial dynamic information triggers the large screen to display the target content and regulates its display state. For a large screen that cannot be touched directly, or not across its full extent, this undoubtedly greatly facilitates user operation: the user needs no auxiliary terminal (such as a PC or a mobile terminal), and the control mode is more intuitive and reliable.
FIG. 3 illustrates an internal block diagram of one embodiment of a computer device; the computer device may specifically be the display control system in the above embodiment of the present invention.
The computer device comprises a processor, a memory, a network interface, an input device and a display screen which are connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program, which, when executed by the processor, may cause the processor to implement the map display method applied to a large screen of the present invention. The internal memory may also store a computer program, and when the computer program is executed by the processor, the computer program may enable the processor to execute the map display method applied to the large screen according to the present invention. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is merely a block diagram of some of the structures related to the disclosed solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment of the present invention, a computer device is provided, where the computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the steps of the map display method applied to a large screen according to the embodiments of the present application described in the present specification.
In an embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, causes the processor to execute the steps of the map display method applied to a large screen according to the embodiments of the present application described in the present specification.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and not necessarily sequentially; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (9)

1. A map display method applied to a large screen is characterized by comprising the following steps:
acquiring real-time face dynamic information of a user;
determining whether to output target content according to the real-time face dynamic information of the user;
and if the target content is determined to be output, outputting the target content on a map display interface, and adjusting the display mode of the target content according to the real-time face dynamic information of the user.
2. The map presentation method of claim 1, wherein said obtaining real-time facial dynamic information of the user comprises:
collecting a user face video;
acquiring real-time face dynamic information of the user from the face video of the user;
wherein the real-time face dynamic information of the user is any one or more of the following contents: the relative position of the user face and the map display interface, the eyeball dynamic of the user and the face/eyeball staying time of the user;
the relative position of the user's face and the map display interface comprises: the distance between the user's face and the map display interface and the orientation of the user's face relative to the map display interface;
the user face/eye dwell time refers to the duration of time that the user is continuously gazing at the pose of the map presentation interface.
3. The map presentation method of claim 2, wherein said determining whether to output target content based on the user real-time facial dynamics information comprises:
determining target content associated with the real-time facial dynamic information of the user according to a preset information processing model;
the preset information processing model comprises association relations between preset face dynamic information and contents to be output; when the real-time face dynamic information of the user is input into the preset information processing model, corresponding content to be output is acquired based on the association relations; and the corresponding content to be output is determined as the target content.
4. The map display method of claim 3, wherein the determining, according to a preset information processing model, the target content associated with the real-time face dynamic information of the user comprises:
if the face/eye dwell time of the user on a target area of the map display interface exceeds a set threshold, determining, as the target content, any one or more of the following for the target area, or content composed from them according to a set rule: enlarged map content, a specific geographic location, geographic landscape descriptions, local culture and customs information, policy information, travel information, and shopping guide information.
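The dwell-time trigger of claim 4 needs the gaze to stay on the *same* area continuously; switching areas must reset the timer. A minimal sketch, with a hypothetical class name and a caller-supplied clock so it stays deterministic:

```python
class DwellTracker:
    """Accumulates continuous gaze time on one target area and reports
    True once the set threshold is exceeded. Hypothetical sketch; the
    patent only states the threshold condition, not this structure."""
    def __init__(self, threshold_s=2.0):
        self.threshold_s = threshold_s
        self.region = None   # area currently gazed at
        self.since = None    # timestamp when gaze entered that area

    def update(self, region, now):
        """region: area under the user's gaze (None = looking away);
        now: current time in seconds. Returns True when dwell fires."""
        if region != self.region:
            # gaze moved to a different area (or away): restart the clock
            self.region, self.since = region, now
            return False
        return region is not None and (now - self.since) >= self.threshold_s
```

Passing `now` in (rather than calling `time.monotonic()` inside) makes the threshold logic unit-testable; a live system would feed it the frame timestamp.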
5. The map display method of claim 3, wherein the outputting the target content on the map display interface comprises:
displaying the target content directly on the map display interface; or
displaying the target content on the map display interface through a pop-up component.
6. The map display method of claim 5, wherein the pop-up component is: a web page, a message box, or a floating window.
7. The map display method of claim 5, wherein the adjusting the display mode of the target content according to the real-time face dynamic information of the user comprises:
zooming the target content in or out according to the distance between the user's face and the map display interface and changes in that distance;
adjusting display content of a focus area of the target content, or paging or scrolling the target content, according to the orientation of the user's face relative to the map display interface and changes in that orientation;
and adjusting a display hue or display brightness of the displayed content according to the eye movement of the user.
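The distance-to-zoom mapping of claim 7 can be a clamped linear function: the closer the face, the larger the content. The function name, range, and bounds below are illustrative assumptions, not values from the patent:

```python
def zoom_factor(distance_cm, near_cm=40.0, far_cm=90.0):
    """Map face-to-screen distance to a display zoom factor.
    Faces at/inside near_cm get maximum zoom (2.0); at/beyond far_cm,
    normal scale (1.0); linear in between. Bounds are assumed values."""
    d = max(near_cm, min(far_cm, distance_cm))      # clamp to [near, far]
    return 1.0 + (far_cm - d) / (far_cm - near_cm)  # 1.0 .. 2.0
```

Clamping first means a user leaning very close cannot push the zoom past its maximum, and the factor varies smoothly as the measured distance changes frame to frame.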
8. A computer device, comprising: a memory and a processor;
the memory has stored therein a computer program;
the computer program, when executed by the processor, causes the processor to perform the steps of the map presentation method applied to a large screen according to any one of claims 1 to 7.
9. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, causes the processor to perform the steps of the map display method applied to a large screen according to any one of claims 1 to 7.
CN202011484131.3A 2020-12-15 2020-12-15 Map display method, computer equipment and readable storage medium Withdrawn CN112527117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011484131.3A CN112527117A (en) 2020-12-15 2020-12-15 Map display method, computer equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN112527117A true CN112527117A (en) 2021-03-19

Family

ID=75000636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011484131.3A Withdrawn CN112527117A (en) 2020-12-15 2020-12-15 Map display method, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112527117A (en)

Similar Documents

Publication Publication Date Title
US10147399B1 (en) Adaptive fiducials for image match recognition and tracking
KR101137041B1 (en) Controlling a document based on user behavioral signals detected from a 3d captured image stream
CN109657533A (en) Pedestrian recognition methods and Related product again
US20160224591A1 (en) Method and Device for Searching for Image
CN110598559B (en) Method and device for detecting motion direction, computer equipment and storage medium
US20140223319A1 (en) System, apparatus and method for providing content based on visual search
US20230089622A1 (en) Data access control for augmented reality devices
US20180150683A1 (en) Systems, methods, and devices for information sharing and matching
CN112749655A (en) Sight tracking method, sight tracking device, computer equipment and storage medium
US20180181596A1 (en) Method and system for remote management of virtual message for a moving object
JPWO2013024667A1 (en) Interest point extraction apparatus, interest point extraction method, and program
CN117455989A (en) Indoor scene SLAM tracking method and device, head-mounted equipment and medium
CN112527117A (en) Map display method, computer equipment and readable storage medium
CN113987326B (en) Resource recommendation method and device, computer equipment and medium
CN110659376A (en) Picture searching method and device, computer equipment and storage medium
CN114998102A (en) Image processing method and device and electronic equipment
CN112527116A (en) AR technology-based large-screen map display system
CN115729544A (en) Desktop component generation method and device, electronic equipment and readable storage medium
CN115309487A (en) Display method, display device, electronic equipment and readable storage medium
US20150286280A1 (en) Information processing method and information processing device
CN114881060A (en) Code scanning method and device, electronic equipment and readable storage medium
CN118051150A (en) Display method and device and electronic equipment
CN116643818A (en) Image interaction processing method, device, equipment and storage medium
CN117528179A (en) Video generation method and device
CN115221995A (en) Information generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210319
