CN113411654A - Information display method, equipment and positioning system - Google Patents

Information display method, equipment and positioning system

Info

Publication number
CN113411654A
CN113411654A
Authority
CN
China
Prior art keywords
time
condition
moving object
video
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010188668.9A
Other languages
Chinese (zh)
Inventor
赵瑞祥
罗千科
裘有斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qing Yanxun Technology Beijing Co ltd
Original Assignee
Qing Yanxun Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qing Yanxun Technology Beijing Co ltd
Priority to CN202010188668.9A
Publication of CN113411654A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses an information display method, information display equipment, and a positioning system, relating to the technical field of information interaction. In the method, time identification information can be displayed on the interactive interface of a video, and can indicate the time point or time period at which the position record of a moving object meets a first condition. A user can thus learn in time when the position record of the moving object met the first condition, and can quickly locate the specific video content to be checked according to that time point or time period, which improves monitoring efficiency and accuracy and improves the user's viewing experience.

Description

Information display method, equipment and positioning system
Technical Field
The present application relates to the field of information interaction technologies, and in particular, to an information display method, device, and positioning system.
Background
With the increasingly wide application of video monitoring, many government agencies and enterprises deploy their own video monitoring networks. In a large video monitoring network, monitoring points (such as cameras) distributed everywhere need to be monitored in real time. Nowadays, more and more monitoring applications require intelligent tracking and monitoring of moving objects (such as people or goods) that travel between different monitoring points.
In the prior art, there is an information display method in which, when a video is recorded, a mapping from video time points to a track progress bar is formed, together with a reverse mapping from the track progress bar to video time points, so that when the track progress bar is dragged, the playing progress of the video changes correspondingly. This scheme requires generating a track progress bar, which usually occupies a large screen area. In particular, as the number of recording devices or mobile users grows, although each user's track is clear at a glance, the times at which different users reach a given position may differ, and a static track progress bar cannot visually show whether two users passed through a position at the same time, which further degrades the experience of finding the corresponding video through this interactive mode.
In other prior art, a positioning base station collects UWB signals emitted by UWB positioning tags attached to goods, packages the received positioning signals with time stamps, and transmits them to an information processing subsystem. The position of a positioning tag is determined from the positioning signal and compared with the shooting ranges of the cameras to determine under which camera's monitoring the tag should appear, so that the video of the goods at the corresponding time T1 can be found quickly. However, in this scheme the user must determine the target user or time before searching, the search efficiency is low, and the search is limited to a single user.
In addition, with the existing video playing mode a user cannot learn in time where the moving objects were at different times. To quickly locate the video to be checked from the surveillance footage, the user has to rely on memory or ask others familiar with the monitoring point's shooting area, which is inefficient and error-prone, and degrades both search efficiency and user experience.
Disclosure of Invention
In view of the above, in order to solve the above technical problems, the present application provides an information display method, an information display apparatus, and a positioning system.
In one aspect, the present application provides an information display method, including:
generating first time identification information according to the position record of the moving object and the video recording position; displaying a first time identifier on an interactive interface, wherein the first time identification information is used for indicating a time point at which the position record meets a first condition or a time period during which the position record meets the first condition; the first time identifier corresponds to the first time identification information.
Optionally, the first time identifier is displayed in a progress bar area or an area having a mapping relationship with the progress bar.
Optionally, the first condition includes one or a combination of the following: the distance between the first moving object and the second moving object satisfies a third condition A; or the relationship of the first moving object to a first position satisfies a third condition B; or the density of moving objects within a first area satisfies a third condition C; or the moving speed of the first moving object satisfies a third condition D; or the relationship between the position record of the moving object and time satisfies a third condition E; or the trajectory of the first moving object satisfies a third condition F.
Optionally, displaying a second time identifier on the interactive interface, wherein the second time identifier is used for indicating a time point when the position record of the second moving object meets a second condition or a time period meeting the second condition; the second time identifier is distinct from the first time identifier.
Optionally, in the interactive interface, the first time identifier and the second time identifier are arranged according to a time sequence; the first time mark and the second time mark are displayed in the same area, or the first time mark and the second time mark are respectively displayed in different areas.
Optionally, if the first time identifier is overlapped with the second time identifier, displaying a third time identifier at the overlapped position; the third time identifier may trigger playing of video content corresponding to the first time identifier and/or the second time identifier, respectively.
Optionally, obtaining a first time through the interactive interface; obtaining position prompt information of a third moving object according to the first time; and displaying a position prompt on the interactive interface; the position prompt information indicates the position of the third moving object at the first time; the position prompt corresponds to the position prompt information.
Optionally, the obtaining of the first time through the interactive interface includes obtaining the position indicated by the mouse pointer when the playing progress bar is dragged, where the time corresponding to the indicated position is the first time; or obtaining the position indicated when the mouse pointer hovers over the playing progress bar, where the time corresponding to the indicated position is the first time.
Optionally, the displaying of the position prompt on the interactive interface includes displaying a thumbnail on the interactive interface, where the thumbnail indicates the position of the third moving object at the first time; or displaying text on the interactive interface, where the text indicates the position of the moving object at the first time.
Optionally, the first time identifier and/or the second time identifier are displayed on a playing progress bar of the interactive interface; the first condition means that the first moving object is located in a first area; the second time identifier is used for indicating a time point when the position record of the moving object meets a second condition or a time period meeting the second condition; the second condition means that the second moving object is located in a second area;
displaying a condition configuration interface, configured to configure the first condition or the second condition or to select the third moving object through the condition configuration interface; the interactive interface comprises an interactive interface for playing a first video; the first time identifier corresponds to the first video; the first time identifier is displayed in the progress bar area or below the progress bar; the first time identifiers are arranged in time order; the display position of the first time identifier corresponds to the time in the position record of the moving object that meets the first condition; when the mouse hovers over the first time identifier, the interactive interface displays prompt information corresponding to the first condition; when the mouse clicks the progress bar, the playing position of the first video jumps to the time corresponding to the clicked position; when the mouse clicks the first time identifier, the video content corresponding to the first time identifier is played accordingly; the position record is a sequence of positions and times.
In another aspect, the application provides another information display method, which includes displaying a first time identifier on an interactive interface, wherein the first time identifier is used for indicating a time point when the position record of the mobile object meets a first condition or a time period meeting the first condition.
Optionally, the first time identifier is displayed in a progress bar area or an area having a mapping relationship with the progress bar.
Optionally, the first condition includes one or a combination of the following: the distance between the first moving object and the second moving object satisfies a third condition A; or the relationship of the first moving object to a first position satisfies a third condition B; or the density of positioning tags within a first area satisfies a third condition C; or the moving speed of the first moving object satisfies a third condition D; or the relationship between the position record of the moving object and time satisfies a third condition E; or the trajectory of the first moving object satisfies a third condition F.
Optionally, displaying a second time identifier on the interactive interface, where the second time identifier is used to indicate a time point when the position record of the moving object meets a second condition or a time period during which it meets the second condition; the second time identifier is distinct from the first time identifier.
Optionally, in the interactive interface, the first time identifier and the second time identifier are arranged according to a time sequence; the first time mark and the second time mark are displayed in the same area, or the first time mark and the second time mark are respectively displayed in different areas.
Optionally, if the first time identifier is overlapped with the second time identifier, displaying a third time identifier at the overlapped position; the third time identifier may trigger playing of video content corresponding to the first time identifier and/or the second time identifier, respectively.
Optionally, the method further comprises obtaining a first time through the interactive interface; obtaining a position prompt of a third moving object according to the first time; and displaying the position prompt on the interactive interface; the position prompt indicates the position of the third moving object at the first time; the position prompt information is used for generating the position prompt.
Optionally, the obtaining of the first time through the interactive interface includes obtaining the position indicated by the mouse pointer when the playing progress bar is dragged, where the time corresponding to the indicated position is the first time; or obtaining the position indicated when the mouse pointer hovers over the playing progress bar, where the time corresponding to the indicated position is the first time.
Optionally, the displaying of the position prompt on the interactive interface includes displaying a thumbnail on the interactive interface, where the thumbnail indicates the position of the third moving object at the first time; or displaying text on the interactive interface, where the text indicates the position of the moving object at the first time.
Optionally, the first time identifier and/or the second time identifier are displayed on a playing progress bar of the interactive interface; the first condition means that the first moving object is located in a first area; the second time identifier is used for indicating a time point when the position record of the moving object meets a second condition or a time period meeting the second condition; the second condition means that the second moving object is located in a second area; displaying a condition configuration interface for configuring the first condition or the second condition or selecting the third moving object through the condition configuration interface; the interactive interface comprises an interactive interface for playing a first video; the first time identification information corresponds to the first video; when the mouse hovers over the first time identifier, the interactive interface displays prompt information corresponding to the first condition; when the mouse clicks the progress bar, the playing position of the first video jumps to the time corresponding to the clicked position; and when the mouse clicks the first time identifier, the video content corresponding to the first time identifier in the first video is played accordingly.
In another aspect, the present application provides an interaction information generating method, including:
recording a time point meeting a first condition or a time period meeting the first condition according to the position of the mobile object, and generating first time identification information; and sending the first time identification information to a client.
Optionally, the first condition includes one or a combination of the following:
the distance between the first moving object and the second moving object satisfies a third condition A; or the relationship of the first moving object to a first position satisfies a third condition B; or the density of moving objects within a first area satisfies a third condition C; or the moving speed of the first moving object satisfies a third condition D; or the relationship between the position record of the moving object and time satisfies a third condition E; or the trajectory of the first moving object satisfies a third condition F.
Optionally, the method further includes receiving a first time sent by the client; according to the first time, acquiring positioning information corresponding to a third moving object at the first time; generating position prompt information of the third moving object according to the positioning information; and returning the position prompt information to the client.
Optionally, generating location prompt information of the third mobile object according to the positioning information, including generating or using a thumbnail as the location prompt information, where the thumbnail indicates a location of the third mobile object at the first time; or generating a character as the position prompt information according to the positioning information, wherein the character indicates the position of the third moving object at the first time.
In still another aspect, the present application provides a client device, which includes a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, and the processor implements the information display method described above when executing the program.
In still another aspect, the present application provides a server device, which includes a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, where the processor executes the computer program to implement the above-mentioned interactive information generating method.
In another aspect, the present application provides a positioning system, which includes a positioning tag, a micro base station, a server, and a monitoring client, where the server is the above server device, and the monitoring client is the above client device.
By means of the above technical solutions, the present application provides an information display method, information display equipment, and a positioning system. Compared with the prior art, time identification information can be displayed on the interactive interface of a video to indicate the time point or time period at which the position record of the moving object meets the first condition, so that a user can learn this in time and can quickly locate the specific video content to be checked according to that time point or time period, improving monitoring efficiency and accuracy and the user's viewing experience.
The foregoing description is only an overview of the technical solutions of the present application. In order to make the technical means of the present application clearer, so that it can be implemented according to the content of the description, and to make the above and other objects, features, and advantages of the present application more comprehensible, detailed embodiments of the present application are set forth below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application; they do not limit the application. In the drawings:
FIG. 1a is a schematic diagram of a positioning system provided by an embodiment of the present application;
FIG. 1b is a schematic structural diagram of a positioning system provided in an embodiment of the present application;
FIG. 1c shows a software video playback schematic of a positioning system;
FIG. 2a is a schematic diagram illustrating an example of a video interactive interface provided by an embodiment of the present application;
FIG. 2b is a schematic diagram illustrating another example video interactive interface provided by the embodiment of the present application;
FIG. 2c is a schematic diagram illustrating an example of another video interactive interface provided by an embodiment of the present application;
FIG. 2d is a schematic diagram illustrating an example of another video interactive interface provided by an embodiment of the present application;
FIG. 3a is a schematic diagram of an application example based on a video interactive interface according to an embodiment of the present application;
FIG. 3b is a schematic diagram illustrating another example of an application provided by an embodiment of the present application;
FIG. 3c is a schematic diagram illustrating another example application provided by an embodiment of the present application;
FIG. 3d is a schematic diagram illustrating another example of an application provided by an embodiment of the present application;
FIG. 3e is a schematic diagram illustrating another application example provided by an embodiment of the present application;
FIG. 4 is a diagram illustrating an example of a conditional configuration area provided by an embodiment of the present application;
fig. 5 shows a schematic diagram of an example of an application scenario provided in an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
For example, the positioning system shown in fig. 1a includes positioning micro tags, micro base stations, access equipment, a solver server, a monitoring client, etc. A micro tag is worn by a moving object (a person, cargo, etc.), so that the position of the moving object can be determined by the positioning system. A camera is usually installed in the monitored area (the camera position is not shown in the figure).
As shown in fig. 1b, which is a schematic diagram of the system structure of the positioning system in fig. 1a, the micro tag in the positioning system periodically broadcasts a positioning signal; after receiving it, the base station forwards the positioning signal to the solver server, and the solver server calculates the position of the micro tag from the positioning signal. In other positioning schemes, the micro tag may be triggered to send the positioning signal by a button press, or by the server.
The positioning system shown in fig. 1a further includes video capture devices and a storage server. The capture devices are fixedly installed, multiple cameras work simultaneously 24 hours a day in the monitoring scene, and the captured video is massive. The position or shooting range of each camera in the positioning system is recorded in a database; for example, in some embodiments the coordinate range shot by each camera is determined by manual calibration and recorded in the database.
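The calibrated-coverage lookup described above can be sketched as follows; the table layout, camera names, and axis-aligned coordinate ranges are illustrative assumptions, not taken from the patent:

```python
# Hypothetical table of manually calibrated camera coverage:
# camera name -> (min corner, max corner) of the shot coordinate range.
CAMERA_RANGES = {
    "camA": ((0.0, 0.0), (5.0, 5.0)),   # room A (assumed)
    "camB": ((5.0, 0.0), (10.0, 5.0)),  # room B (assumed)
}

def cameras_covering(point):
    """Return the cameras whose calibrated shooting range contains the point."""
    x, y = point
    return [cam for cam, ((x0, y0), (x1, y1)) in CAMERA_RANGES.items()
            if x0 <= x <= x1 and y0 <= y <= y1]
```

Given a located coordinate, this answers the question the system asks later: under which camera's monitoring should the tag appear?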
Fig. 1c shows the software interface of the monitoring end of the positioning system. At present, if monitoring personnel need to find the video of a specific person, the video is usually traced back manually, or the corresponding video is searched for by image recognition. However, because recordings are long and the person's face may be occluded, such video search is costly and inefficient.
A moving object is an object carrying a micro tag in the positioning system. A sequence of the moving object's coordinates and times, i.e., the track record of the moving object, is recorded; it may also be called a position record. In different embodiments, the coordinates may be 0-, 1-, 2-, or 3-dimensional.
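As a minimal sketch of such a position record (all names and the sample data are assumptions for illustration only):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class PositionSample:
    t: float                    # timestamp, e.g. seconds since midnight
    coord: Tuple[float, ...]    # 0- to 3-dimensional coordinate

# A position record (track record) is a time-ordered sequence of samples.
track = [
    PositionSample(0.0, (1.0, 2.0)),
    PositionSample(1.0, (1.5, 2.5)),
    PositionSample(2.0, (2.0, 3.0)),
]
assert all(a.t <= b.t for a, b in zip(track, track[1:]))  # time-ordered
```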
The embodiment provides an information display method, which may also be referred to as a video playing method, and the method includes:
101. Generate first time identification information according to the position record of the moving object and the video recording position.
102. Display the first time identifier on the interactive interface.
In some application scenarios of the positioning system of this embodiment, for example a monitoring scenario, the user applet (the first moving object) and the user orange carry positioning micro tags, and the server periodically records the coordinate positions of the user applet and the user orange to form track records. During monitoring, an incident involving the user applet and the user orange occurs, and the administrator needs to trace back the video of when the event occurred. Fig. 2b shows a video interactive interface, in which area 201 is the video frame display area and the reference line 211 serves as a progress bar. For example, the video played in the player is spliced from 4 segments recorded on 2020.01.02: video StreamA1 of room A during 00:00-01:00, video StreamB1 of room B during 00:00-01:00, video StreamA2 of room A during 01:00-02:00, and video StreamB2 of room B during 01:00-02:00. The database stores the track records of the user applet and the user orange. By computing over these track records, the time period during which the distance between the two users is less than 50cm and the moving speed is greater than 2m/s is found to be 00:10-01:00, and the track records of the two users within 00:10-01:00 are then queried according to that period.
According to the queried track records of the user applet and the user orange and the coordinate range shot by each camera (i.e., the position of the recorded video), it is determined that both users were in room A during 00:10-01:00, and time identification information is generated accordingly. The time identification information indicates that 00:10-01:00 should be marked below the portion of the progress bar corresponding to video StreamA1; the client marks that position according to the time identification information (i.e., displays the first time identifier 212a), so that an administrator in the monitoring room can quickly locate the video through the video interactive interface shown in fig. 2a. In the above embodiment, the first condition is that the first moving object (the user applet) is less than 50cm away from the user orange and the moving speed is greater than 2m/s. In fig. 2b, displaying the first time identifier on the progress bar reduces the number of interactive interface controls, simplifying the operation interface while retaining the interactive function and a convenient interactive experience.
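Marking a qualifying time period on the progress bar, as the client does when it draws the time identifier 212a, reduces to mapping timeline instants to pixel positions; the bar geometry and timeline numbers below are assumed for illustration:

```python
def time_to_x(t, timeline_start, timeline_end, bar_x, bar_width):
    """Map a timeline instant t to an x pixel coordinate on the progress bar."""
    frac = (t - timeline_start) / (timeline_end - timeline_start)
    return bar_x + frac * bar_width

# A 2-hour spliced timeline (0..7200 s) drawn on a 720 px wide bar at x=0:
# the qualifying period 00:10-01:00 (600..3600 s) becomes a marker span.
x_start = time_to_x(600, 0, 7200, 0, 720)    # about 60 px
x_end = time_to_x(3600, 0, 7200, 0, 720)     # about 360 px
```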
It can be understood that besides the way of displaying the time mark 212a in fig. 2b, there are many other ways to display it. For example, as shown in fig. 2a, the progress bar 202 is displayed in the interactive interface, the reference line 211 has a mapping relationship with the progress bar, and the user can roughly know the time indicated by the time mark through the reference line. In fig. 2a, 20 and 206 indicate the start time and the end time of the progress bar's time axis, and 209 indicates the video playback progress.
Compared with the prior art, the information display method provided by this embodiment can display a time mark on the interactive interface of the video, the time mark indicating the time point or time period at which the position record of the moving object meets the first condition. The user can thus promptly learn that time point or time period and quickly locate the specific video content to be viewed, which improves monitoring efficiency and accuracy and improves the user's viewing experience.
In step 101, the position record of the moving object may refer to the position record of a single moving object, and the position record may also be called a track record. For example, in one implementation, the first time identification information may be generated according to the position record of the first moving object and the position where each video was recorded; specifically, a time A (a time point or a time period) at which the first moving object satisfies the first condition is determined according to the position record of the first moving object.
In some embodiments, if the interactive interface is a video playing interface that plays a video of the first position, a time B (a time period) during which the first moving object is near the video recording position is determined, together with the coincidence time C of time B and time A. First time identification information is generated according to time C, the client receives the first time identification information, and the first time identifier is displayed on the progress bar at the position corresponding to time C.
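Determining the coincidence time C amounts to intersecting two sets of intervals; a sketch, assuming time A and time B are each given as sorted lists of (start, end) pairs in seconds:

```python
def overlap_periods(periods_a, periods_b):
    """Intersect two sorted lists of (start, end) periods: the result
    corresponds to the coincidence time C of condition time A and
    near-camera time B."""
    result, i, j = [], 0, 0
    while i < len(periods_a) and j < len(periods_b):
        start = max(periods_a[i][0], periods_b[j][0])
        end = min(periods_a[i][1], periods_b[j][1])
        if start < end:
            result.append((start, end))
        # advance whichever interval ends first
        if periods_a[i][1] < periods_b[j][1]:
            i += 1
        else:
            j += 1
    return result
```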
In some embodiments, the interactive interface is a video playback interface that plays a video. Videos are recorded at n camera positions; for each camera position, the time during which the first moving object is near that position is judged one by one to obtain a time B, the coincidence time C of time B and time A is judged one by one, and the first identification information is generated according to the correspondence between time C and the camera position. In one embodiment, as shown in fig. 2c, a plurality of reference lines (211, 221, 231) are displayed in the interactive interface, each reference line corresponds to a camera position, and the position of the time mark on each reference line corresponds to its time C. When the user clicks a reference line or a time identifier, the corresponding video content is played in the video playing interface.
In other embodiments, the interactive interface is a multi-window video playing interface, and each window can play the video of one camera position. Videos are recorded at n camera positions; for each camera position, the time during which the first moving object is near that position is judged to obtain a time B, the coincidence time C of time B and time A is judged, and the first identification information is generated according to the correspondence between time C and the camera position. In one embodiment, a reference line 205 with a progress bar function is shown in the interactive interface of fig. 3a. For each camera position, the corresponding time identifier is mapped onto the reference line 205 according to its time C. When the user clicks a time identifier, the video content of the corresponding camera position at the time point or in the time period indicated by that identifier is played in the video playing window.
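Placing a time identifier on the reference line requires mapping each time C onto the bar's pixel range; a sketch under the assumption of a linear time axis, with illustrative parameter names:

```python
def mark_positions(periods, axis_start, axis_end, bar_width):
    """Map (start, end) time periods (seconds) onto pixel spans of a
    progress bar of bar_width pixels covering [axis_start, axis_end]."""
    scale = bar_width / (axis_end - axis_start)
    return [(round((s - axis_start) * scale), round((e - axis_start) * scale))
            for s, e in periods]
```

A period of 00:10-01:00 on a two-hour axis rendered as a 720-pixel bar would thus occupy pixels 60 through 360.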
In some embodiments, the video playing window plays the video of a first camera position, or no video; the first time identifier a corresponds to a second camera position, and after the user clicks the first time identifier a, the video playing window plays the video of the second camera position at the time or time period indicated by the first time identifier a. In another embodiment, the time identifiers of two camera positions overlap in the display interface; if the first time identifier and the second time identifier overlap, a third time identifier is displayed at the overlapping position. The third time identifier can trigger the video contents corresponding to the first and second time identifiers to be played in the 2 video playing windows (201a, 201b) of the interactive interface.
In other implementations, the position record of the moving object may also refer to the position records of a plurality of moving objects. For example, in one embodiment, room A is provided with a camera position, the video of room A is played in the interactive interface, and a time B (a time period) during which the density of moving objects in room A exceeds a threshold is determined. First time identification information is generated according to time B, the client receives it, and the first time identifier is displayed on the progress bar at the position corresponding to time B.
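The density judgment for time B can be sketched as a threshold scan; this assumes the server has already counted, per timestamp, the tags located inside room A:

```python
def dense_periods(samples, threshold):
    """samples: list of (timestamp, tag_count_in_room) pairs in time
    order; return (start, end) periods during which the count meets or
    exceeds threshold."""
    periods, start = [], None
    for t, count in samples:
        if count >= threshold and start is None:
            start = t                      # period opens
        elif count < threshold and start is not None:
            periods.append((start, t))     # period closes
            start = None
    if start is not None:
        periods.append((start, samples[-1][0]))
    return periods
```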
The first time identifier may be a picture, a color block, a text symbol, or the like, and indicates the time point or time period at which the position record of the moving object (for example, the recorded position of the moving object's positioning tag at each time point) meets the first condition. The displayed first time identifier corresponds to the first time identification information. In this embodiment, the time identification information may be obtained through the server: it is generated by the server and then sent to the client for display.
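The time identification information sent from server to client might look as follows when serialized as JSON; all field names here are hypothetical illustrations, not part of this embodiment:

```python
import json

# Hypothetical server payload describing one mark on streamA1's bar.
time_identification_info = {
    "video_id": "streamA1",
    "marks": [
        {"type": "first", "start": "00:10", "end": "01:00",
         "condition": "distance<0.5m and speed>2m/s"}
    ],
}
message = json.dumps(time_identification_info)  # sent over the wire
decoded = json.loads(message)                   # parsed by the client
```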
Compared with the prior art, the method calculates, from the track of the moving object, the start time, end time, or duration of an event the administrator is concerned with, and marks that period on the progress bar as an aid, which helps the administrator quickly locate the video where the event of concern may have occurred and reduces the time spent searching the video. The first condition may differ according to the actual application scenario.
In some embodiments, as shown in fig. 2b, the time markers (212a, 213a, 212b, 213b) are displayed on the progress bar 205, so that more display space can be provided for other controls during video playing.
Further, as a refinement and extension of the specific implementation of the above embodiment, the first condition includes one or a combination of several of the following:
the distance between the first moving object and the second moving object meets a third condition A;
the relationship of the first moving object to the first position satisfies a third condition B;
the density of the positioning labels in the first area meets a third condition C;
the moving speed of the first moving object satisfies a third condition D; or,
the relation between the position record of the moving object and time meets a third condition E.
The following alternatives are given as examples:
In different application scenarios, the corresponding third condition A may differ. Satisfying the third condition A may mean that the distance between the first moving object and the second moving object is less than or equal to a first distance; or it may mean that the distance between them is greater than or equal to a second distance. When determining the distance between the two objects, in some embodiments the determination is made in conjunction with a map: for example, if a wall partition lies between them, the distance between the first moving object and the second moving object is the shortest distance the first moving object must move to reach the location of the second moving object.
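One way to honour wall partitions when judging the third condition A is to compute the walking distance on a rasterized floor plan instead of the straight-line distance; a breadth-first-search sketch, assuming the map is a grid in which 1 marks a wall cell:

```python
from collections import deque

def walking_distance(grid, start, goal):
    """Shortest number of grid steps from start to goal on a floor-plan
    grid where 1 marks a wall; -1 if unreachable. Approximates the
    distance an object must actually walk around partitions."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return -1
```

With a wall between the two positions, the returned distance exceeds the straight-line distance, matching the shortest-path definition above.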
Satisfying the third condition B may mean that the distance from the first moving object to the first position is less than or equal to a third distance; or that the distance from the first moving object to the first position is greater than or equal to a fourth distance; or that the moving object is within the target area. The first position may refer to a point, an enclosed area, or a boundary (e.g., a fence).
Satisfying the third condition C may mean that the density of positioning tags in the first area is greater than or equal to a first density; or that the density of positioning tags in the first area is less than or equal to the first density.
satisfying the third condition D may mean that the moving speed of the first moving object is greater than 2m/s, or satisfying the third condition D may mean that the moving speed of the first moving object is 2m/s or less.
Satisfying the third condition E may mean that the distance between the first moving object and the camera is less than a threshold within the time range 2:00-3:00. For example, during power plant inspection, the related videos in the plant are spliced together according to the user's activity track to obtain a one-day video of the user applet; the user applet is dispatched between 2:00 and 3:00 to execute a power plant inspection task, and the area between 2:00 and 3:00 is marked on the progress bar with a first time mark, so that an administrator can conveniently check how the user applet executed the inspection task.
The third condition F may be that the trajectory graph features meet set feature conditions. For example, in some embodiments, the third condition F may be that the track of the user applet matches the outer contour of the power plant building, the first time identifier then indicating the video content of the user applet inspecting the periphery of the plant. The condition is not limited to trajectory graph features: those skilled in the art will appreciate that any feature pattern that can be recognized or detected in graphic imaging can serve as the third condition F.
It is understood that all possible third conditions A-F are not exhausted here, and other technical solutions apparent to those skilled in the art are also included in conditions A-F.
In other embodiments of the present disclosure, at least 2 types of time identifiers are displayed in the interactive interface of the video, that is, a first time identifier and a second time identifier are displayed on the interactive interface, where the second time identifier indicates the time point or time period at which the position record of the second moving object meets the second condition; the second time identifier is distinct from the first time identifier.
The first moving object and the second moving object may be the same entity or different entities. For example, in one application scenario, as shown in fig. 2a, a first time identifier (212a, 212b) and a second time identifier (213a, 213b) are displayed in the interactive interface of the video, the first indicating the time period of the user applet in room A and the second indicating the time period of the user applet in room B. In another application scenario, the first time identifier indicates the time period of the user applet in room A, and the second time identifier indicates the time period of the user orange in room A. In yet another, the first time identifier indicates the time period of the user applet in room A, and the second time identifier indicates the time period during which the user orange is less than 50cm away from the user applet with a moving speed below 1m/s.
The first time identifier and the second time identifier may be displayed on the interactive interface in the same identifier area, as in fig. 2b, or in different identifier areas, as in fig. 2c, namely the first time identifier (212a, 212b) and the second time identifier (213a, 213b). In some embodiments, identifier display areas are generated according to the number of classes of time identifiers; for example, as shown in fig. 2c, there are 3 classes of time identifiers, respectively a first time identifier (212a, 212b), a second time identifier (213a, 213b), and a third time identifier (214a), displayed correspondingly in different display areas (211, 221, 231). In some embodiments, the display areas support the same interactions as the progress bar, and the user interacts with a display area to play the corresponding video content.
In other embodiments, the method further comprises obtaining a first time through the interactive interface; obtaining position prompt information of a third moving object according to the first time; and displaying a position prompt on the interactive interface, the position prompt indicating the location of the third moving object at the first time.
In an optional embodiment, as shown in fig. 3a, a video playing area 331 and a playing progress bar 301 are displayed on the interactive interface. Obtaining the first time through interaction between the mouse 321 and the playing progress bar 301 may specifically include: obtaining the position indicated by the mouse pointer when the playing progress bar is dragged, the indicated position being the first time; or obtaining the position indicated when the mouse pointer hovers over the playing progress bar, the indicated position being the first time.
For example, when the mouse hovers above the playing progress bar, the actual time indicated by the mouse position on the bar is obtained according to the correspondence between the progress bar and the actual time of the video. For example, the video played in the video player is formed by sequentially splicing the following segments: on 2020.01.02, video streamA1 of room A during 00:00-01:00, video streamB1 of room B during 00:00-01:00, video streamA2 of room A during 01:00-02:00, and video streamB2 of room B during 01:00-02:00. The mouse shown in fig. 3a indicates a location 321 within streamA2 at 01:10, i.e., 2020.01.02 01:10. Based on the indicated first time, the database is queried for the track record of the user applet, position prompt information is generated according to the queried position and sent to the client, and the interactive interface displays the position prompt 311. The position prompt information may be in JSON, XML, picture, or other formats. It is understood that in the above embodiment there is a mapping relation between the position 321 indicated by the mouse in the interface and the corresponding time 2020.01.02 01:10; whether the position 321 or the time 2020.01.02 01:10 is called the first time, those skilled in the art know how to obtain the corresponding track data to generate the position prompt information. This application is therefore not limited in this respect, and any modification contemplated by those skilled in the art is intended to fall within its scope.
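The mapping from a progress-bar offset on the spliced timeline back to the actual recording time can be sketched as follows; the segment list format (duration in seconds, real-world start second) is an illustrative assumption:

```python
def actual_time(offset_sec, segments):
    """Convert an offset (seconds) on the composite timeline into the
    real-world timestamp (seconds since midnight), given segments as
    (duration_sec, real_start_sec) in splice order. Returns None if
    the offset falls past the last segment."""
    for duration, real_start in segments:
        if offset_sec < duration:
            return real_start + offset_sec
        offset_sec -= duration  # skip over this segment
    return None
```

With four one-hour segments in which the third (streamA2) starts at real time 01:00, an offset 2h10m into the composite maps to real time 01:10, as in the example above.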
Optionally, displaying the position prompt on the interactive interface may specifically include displaying a thumbnail on the interactive interface, the thumbnail indicating the position of the third moving object at the first time; or displaying text on the interactive interface, the text indicating the position of the moving object at the first time.
The specific content of the position prompt is not shown in the position prompt 311 of fig. 3a; its presentation includes, but is not limited to, the ways shown in figs. 3b-3d, which are schematic diagrams of the position prompt 311 in fig. 3a, generated from the position prompt information. For example, according to 2020.01.02 01:10, the track record of the user applet in the database is queried to obtain the coordinate position L1 of the user applet at that time (L1 is located in room A). The corresponding position prompt may then be: in fig. 3b, a full or partial map is displayed and a mark point 311a represents the position of the user applet at 2020.01.02 01:10; in fig. 3c, room A is highlighted to indicate to the user that the applet is located in room A; in fig. 3d, the text "Room A" is displayed, indicating that the user applet is located in room A.
In other embodiments, as shown in fig. 4, the display interface further includes a condition configuration area, through which the first condition can be configured or the third moving object selected. It is understood that operation interfaces of different styles can be configured for different application scenarios and different users; the condition configuration area thus meets the user's application requirements in different scenarios.
In other application scenarios of the present invention, the first moving object is the user applet carrying a micro tag. The database stores the track record of the user applet and the video Stream1 of room A; the track record of the user applet is queried to obtain the time period during which the user applet was in room A, and that time period is marked on the progress bar of video Stream1. In this embodiment, the first condition is that the first moving object (the user applet) is located in room A.
The method executed by the client comprises the following steps:
201. the client displays the first time identifier on the interactive interface of the first video.
The interactive interface can comprise a video playing area, a progress display area, and the like. The user can control playback by clicking the playing progress bar or the playback control buttons. The video played in the display interface may be obtained by splicing the videos shot at each monitoring point according to the moving track of a moving object (such as a person or an object). For example, if user A is located in area a during 1:00-2:00 and in area b during 2:00-3:00, video 1 of area a during 1:00-2:00 is spliced with video 2 of area b during 2:00-3:00 to obtain the video played in the display interface of this embodiment.
The first time identifier may be used to indicate the time point or time period at which the position record of the moving object (which may record the position of the moving object's positioning tag at each time point) meets the first condition. In this embodiment, the server may generate the time identification information in XML, JSON, or a custom format and transmit it to the client, which parses it to generate and display the time identifier. In other embodiments, the time identifier may be generated as a picture by the server.
For example, the time identifier is displayed on the interactive interface so that the user can quickly learn, through the visual time identifier, when certain events happened to the moving object (such as entering or leaving a specific area, or meeting or travelling together with another moving object), and can then quickly locate the video content to be viewed according to the time identifier, meeting the user's viewing requirements and improving the efficiency of monitoring and viewing the moving object.
Compared with the prior art, the information display method provided by this embodiment can display a time mark on the interactive interface of the video, the time mark indicating the time point or time period at which the position record of the moving object meets the first condition, so that the user can promptly learn that time point or period and quickly locate the specific shot video to be checked from the time mark displayed in the user interface, improving the efficiency and accuracy of checking and monitoring.
The first condition may be different according to actual application scenarios. The following embodiments (a) to (d) are given by way of example:
(a) The first condition may mean that the distance between the first moving object and the second moving object is less than or equal to a first distance. For example, the first video contains two moving objects; from their position records, the server obtains the time point or time period during which the distance between them is less than or equal to a certain distance (such as 50cm), generates corresponding time identification information from it, and sends it to the client, which renders the time identifier on the playing progress bar, so the user can conveniently locate the video content in which the two moving objects meet or travel together. Correspondingly, the server can also obtain the time point or time period during which the distance between the two moving objects is greater than or equal to a certain distance, generate the corresponding time identification information, and send it to the client for rendering on the playing progress bar.
(b) The first condition may mean that the distance from the first moving object to the first position is less than or equal to a second distance. The first position may be the position of a particular reference object, or a fence. For example, the server records the position of the moving object and the position of the fence; if the moving object crosses the fence, the server generates corresponding time identification information according to the time point or time period of the event and sends it to the client, which renders the corresponding time identifier on the playing progress bar, so that a user can later quickly locate the video content of the moving object crossing the fence. In other embodiments, if the moving object has entered the fence, the server likewise generates time identification information for the event and sends it to the client, which renders the corresponding time identifier on the playing progress bar, so that a user can quickly locate the video content of the moving object entering the fence.
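Judging whether a tag coordinate has entered the fence can be done with a standard ray-casting point-in-polygon test; a sketch, assuming the fence is given as a list of (x, y) vertices:

```python
def in_fence(point, fence):
    """Ray-casting point-in-polygon test: is the tag coordinate inside
    the fence (a list of (x, y) vertices in order)?"""
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # edge straddles the horizontal ray and lies to the right of it
        if (y1 > y) != (y2 > y) and x < x1 + (x2 - x1) * (y - y1) / (y2 - y1):
            inside = not inside
    return inside
```

Scanning the track record with this test yields the entry and exit times from which the time identification information is generated.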
(c) The first condition may mean that the density of positioning tags within the first area is greater than or equal to a first density. For example, the server calculates the density of tags in the positioning area; if the density of positioning tags in a certain area at a certain time point or during a certain time period is greater than or equal to a density threshold, the server generates corresponding time identification information according to the time point or period of the event and sends it to the client, which displays the corresponding time identifier on the playing progress bar according to that information.
(d) The first condition may mean that the moving object is within the target area. For example, different areas are divided on the map; the server generates corresponding time identification information according to the time points or time periods during which the moving object is in the different areas and sends it to the client, which renders the corresponding time identifiers on the playing progress bar. In some embodiments, time marks corresponding to different areas use different styles, so that the user can tell through the time marks which video clip on the progress bar corresponds to which shooting area, and can then quickly locate the video content of the moving object entering each area. As shown in fig. 5, different time marks may be distinguished on the playing progress bar by different colors and diagonal lines; different segment marks correspond to the different areas the moving object visited, and the time identification information is displayed through segment marks so as to show the times at which the moving object (user Wang) entered and left each department's monitoring area.
Further optionally, a second time identifier is displayed on the interactive interface, indicating the time point or time period at which the position record of the moving object meets the second condition; the second time identifier is distinct from the first time identifier. In step (d), the time identifier corresponding to the first target area may differ from that corresponding to the second target area, for example in color or line shape, to better distinguish the video segments shot in different areas and help the user locate them quickly. And/or, if the time identifiers of a plurality of moving objects are displayed, the identifiers corresponding to the first and second moving objects are distinguished from each other, for example by different colors or fonts, to better tell apart the identifiers of different moving objects. And/or, the time identifiers corresponding to a first condition A and a first condition B differ: for example, if several pieces of time identification information about the moving object exist at the same time, where time identifier 1 indicates the time point or period at which the position record meets the first condition A and time identifier 2 indicates the time point or period at which it meets the first condition B, then time identifier 1 and time identifier 2 can be displayed with different colors, fonts, or lines, so that the user can better distinguish time identifiers with different meanings.
In some embodiments, the information display method may further include: detecting that the mouse pointer slides over or hovers above a time mark whose corresponding first condition is a first condition A, and displaying the information corresponding to the first condition A. For example, the time identifier corresponds to the time point or period at which a user entered fence A; when the user slides over or hovers above the time identifier with the mouse, the specific monitoring event type (such as an alarm event type or an event stage prompt), a brief description of the event, and the like can be displayed, so that the user can quickly learn the event content corresponding to the time identification information, quickly locate the video content to be viewed, and improve monitoring and viewing efficiency.
Further optionally, if there are a plurality of moving objects, for example at least two, a second time identifier is displayed on the interactive interface of the first video, the second time identification information indicating the time point or time period at which the position record of the second moving object meets the second condition; the second time identifier is distinct from the first time identifier. For example, step 201 may specifically include displaying, on the interactive interface, different time identifiers corresponding to the respective moving objects, differing from one another in style or font; or displaying the time identifiers of one or more designated moving objects according to which objects are to be viewed (e.g., designated by the user, chosen at random by the system, or designated by default).
Further, in order to facilitate the user to know the position of the moving object at the specific time in other ways, optionally, the method of the embodiment may further include obtaining the indicated first time through the interactive interface; obtaining position prompt information of a third moving object according to the first time; and displaying the position prompt information on the interactive interface.
The position prompt information indicates the location of the third moving object at the first time and is used to generate the position prompt. In an optional mode, the first time indicated by the user on the progress bar in the interactive interface is obtained from the UI interaction, and the server queries the position A of the moving object corresponding to that first time. The first time indicated on the progress bar may be the time T1 corresponding to the mouse position while the mouse hovers over the progress bar; the position prompt information is then generated from the position A of the moving object at T1. The client receives the position prompt information and uses it to generate the corresponding position prompt. In some embodiments, the position prompt information is a coordinate value on a map and the position prompt is a picture (as shown in figs. 3b-3c and 3e); the generating step comprises caching the picture of the corresponding map at the client, calculating the location to which the coordinate value maps on the map picture after the client receives the position prompt information, and rendering a mark representing the position of the moving object at that location on the map picture, thereby obtaining the position prompt (picture). It is understood that this generating step may also be performed at the server. In other embodiments, as shown in fig. 3e, in addition to the mark representing the position of the moving object (the five-pointed star in the figure), the moving track of the moving object (the dotted line in the figure) is rendered in the position prompt (picture).
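The client-side step of mapping the received coordinate value onto the cached map picture can be sketched as follows; the bounds and image-size parameters are illustrative assumptions:

```python
def map_to_pixel(coord, map_bounds, image_size):
    """Map a world coordinate (x, y) in metres onto pixel coordinates
    of a cached map picture. map_bounds = (min_x, min_y, max_x, max_y);
    image_size = (width, height); the image y-axis points downwards."""
    x, y = coord
    min_x, min_y, max_x, max_y = map_bounds
    width, height = image_size
    px = (x - min_x) / (max_x - min_x) * width
    py = (1 - (y - min_y) / (max_y - min_y)) * height  # flip y for screen
    return round(px), round(py)
```

The mark representing the moving object is then drawn at the returned pixel position on the map picture.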
Through this optional mode, the first time indicated by an operation instruction can be obtained through the interactive interface of the video, position prompt information is generated from the position of the moving object corresponding to that first time, and the position prompt corresponding to the first time is then generated from the prompt information and displayed. The position prompt shown in the display interface lets the user promptly learn where the moving object is located at a specific time and quickly locate the specific video content to be viewed; the method thus improves monitoring efficiency and accuracy and improves the user's viewing experience.
Optionally, a first video is played in the interactive interface. The first video is a composite video formed by splicing several video segments in sequence; for example, four segments recorded on 2020.01.02: the video streamA1 of room A from 00:00 to 01:00, the video streamB1 of room B from 00:00 to 01:00, the video streamA2 of room A from 02:00 to 03:00, and the video streamB2 of room B from 02:00 to 03:00. Obtaining the indicated first time through the interactive interface of the first video may specifically include obtaining the position T1 indicated when the mouse pointer slides over the interactive interface. The client or the server then calculates the actual time T2 corresponding to the indicated position T1 according to the mapping between actual time and the composite video's time axis. Position prompt information is generated from the position of the moving object corresponding to the actual time T2, and after the client receives it, the position prompt corresponding to the first time is generated and displayed.
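The mapping between the composite video's time axis and actual time described above can be sketched as follows, using the four example segments. The segment table, the names, and the seconds-on-the-composite-axis convention are assumptions for illustration, not part of the patent.

```python
from datetime import datetime, timedelta

# Each spliced segment: (composite-axis start in seconds, actual start time, duration).
# Layout follows the example: four one-hour clips from rooms A and B on 2020.01.02.
SEGMENTS = [
    (0,     datetime(2020, 1, 2, 0, 0), 3600),   # streamA1: room A, 00:00-01:00
    (3600,  datetime(2020, 1, 2, 0, 0), 3600),   # streamB1: room B, 00:00-01:00
    (7200,  datetime(2020, 1, 2, 2, 0), 3600),   # streamA2: room A, 02:00-03:00
    (10800, datetime(2020, 1, 2, 2, 0), 3600),   # streamB2: room B, 02:00-03:00
]

def composite_to_actual(t1_seconds):
    """Map a position T1 on the composite video axis to the actual time T2."""
    for offset, actual_start, duration in SEGMENTS:
        if offset <= t1_seconds < offset + duration:
            return actual_start + timedelta(seconds=t1_seconds - offset)
    raise ValueError("position outside the composite timeline")
```

With this table, an indicated position of 7260 seconds on the composite axis, for instance, falls inside streamA2 and maps back to the actual time 02:01 on 2020.01.02.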
For example, the position of the mouse pointer on the interactive interface may be the position indicated when the mouse pointer drags the play progress bar, or the position indicated when the mouse pointer hovers over the play progress bar.
Besides these manners, the indicated first time may also be obtained on the interactive interface through operations such as a mouse click or a touch (on a touch screen).
As a further optional mode, displaying the position prompt information on the interactive interface may specifically include displaying text or a thumbnail on the interactive interface, as shown in fig. 3b-3 e. Prompting the position information of the moving object as text or a thumbnail lets the user obtain the position of the moving object at the first time more intuitively and quickly.
Further, as an optional mode, displaying the position prompt information on the interactive interface may specifically include displaying a prompt box at the position of the mouse pointer on the play progress bar of the interactive interface, and displaying the text or thumbnail shown in fig. 3b-3e in that prompt box.
If no condition specifies which moving object to view, the position information of every moving object can be displayed on the interactive interface, for example the current moving track and position of each moving object shown in a prompt box, with the tracks and position points of different moving objects distinguished by different colors, line styles, and the like. Alternatively, the position prompt information may cover only one or several specified moving objects (specified by the user, or randomly or by default by the system, etc.), in which case the position prompt displayed on the interactive interface is generated from the position prompt information of the one or more moving objects specified among the plurality of moving objects.
The foregoing embodiment describes the video playing process from the client side. Further, to fully illustrate the implementation of this embodiment, an interactive information generating method applicable to the server side is also provided; it mainly corresponds to the process of obtaining the time identification information in the foregoing method, and includes:
301. The server records, according to the position record of the mobile object, a time point meeting a first condition or a time period meeting the first condition, and generates first time identification information.
Specifically, the time identification information is generated from the first condition and the time points or time periods in the position record of the mobile object that meet the first condition.
In this embodiment, the positions of the positioning tag corresponding to the moving object at different times are recorded in advance. For example, the mobile object carries a positioning tag that communicates with a positioning base station; the tag sends positioning data to the base station at regular intervals, the base station forwards the data to a calculation server, which calculates the position of the tag and then records the corresponding mapping, i.e. the positions of the tag at the different times, to obtain the position record of the mobile object.
Specifically, the position record of the mobile object may first be obtained and analyzed; combined with the content of the first condition corresponding to the service requirement, the time points or time periods at which the position record meets the first condition are found, and the corresponding time identification information is generated.
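The scan over the position record can be sketched as follows. This is an illustrative sketch: the record format (time-ordered samples), the predicate interface, and the function name are assumptions, not part of the patent.

```python
def find_matching_periods(position_record, condition):
    """Scan a time-ordered position record and merge consecutive matching
    samples into (start, end) periods; an isolated matching sample yields
    a time point where start == end.

    position_record: list of (timestamp, position) tuples, time-ordered.
    condition: predicate on a position, e.g. "inside the first area".
    """
    periods = []
    start = prev = None
    for ts, pos in position_record:
        if condition(pos):
            if start is None:
                start = ts        # a new matching span begins
            prev = ts
        elif start is not None:
            periods.append((start, prev))   # span just ended
            start = prev = None
    if start is not None:
        periods.append((start, prev))       # span reaches end of record
    return periods
```

The time identification information could then pair each returned period with the condition it satisfied, ready to be sent to the client in step 302.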
302. The generated first time identification information is sent to the client.
Furthermore, the client displays the time identification information on the interactive interface. Through it, the user can quickly learn the time points or time periods at which the video contains key content, and locate the video content to be viewed accordingly, meeting the viewing requirement and improving the efficiency of monitoring the mobile object.
Through this method applicable to the server side, the user can promptly learn the time points or time periods at which the position record of the mobile object meets the first condition and, according to them, quickly locate the captured video of the specific position to be checked, improving monitoring efficiency and accuracy and the user's viewing experience.
Further, as a refinement and extension of the specific implementation of the above embodiment, the first condition includes one or a combination of the following: the distance between a first moving object and a second moving object satisfies a third condition A; or, the relation of the first moving object to a first position satisfies a third condition B; or, the density of moving objects in a first area satisfies a third condition C; or, the moving speed of the first moving object satisfies a third condition D; or, the relation between the position record of the moving object and time satisfies a third condition E.
For example, the first condition may be that the distance between the moving object and the second moving object is less than or equal to, or greater than or equal to, a first distance; or that the distance from the moving object to the first position is less than or equal to, or greater than or equal to, a second distance; or that the density of positioning tags in the first area is greater than or equal to a first density; or that the moving object is within a target area.
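The threshold checks above can be sketched as simple predicates. A minimal illustration follows; the function names, the 2-D coordinate convention, and the rectangular representation of areas are assumptions for illustration, not taken from the patent.

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def near_second_object(pos, second_pos, first_distance):
    """Condition A: distance between the two moving objects is within a bound."""
    return distance(pos, second_pos) <= first_distance

def near_first_position(pos, first_position, second_distance):
    """Condition B: the moving object is within a bound of a fixed first position."""
    return distance(pos, first_position) <= second_distance

def area_density_exceeds(positions, area, first_density):
    """Condition C: density of positioning tags inside the first area.
    area: (min_x, min_y, max_x, max_y); density = tags per unit area."""
    min_x, min_y, max_x, max_y = area
    inside = sum(1 for x, y in positions
                 if min_x <= x <= max_x and min_y <= y <= max_y)
    return inside / ((max_x - min_x) * (max_y - min_y)) >= first_density

def inside_target_area(pos, area):
    """Condition: the moving object is within the target area."""
    min_x, min_y, max_x, max_y = area
    return min_x <= pos[0] <= max_x and min_y <= pos[1] <= max_y
```

Predicates like these could be passed directly to a record-scanning step that collects the matching time points and periods.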
In addition to sending the time identification information to the client: if, according to the displayed time identification information, the user triggers playback of the video content at the corresponding time point or time period, the client sends a video acquisition request for that time point or time period. Correspondingly, the server receives the request, acquires the first positioning information corresponding to the mobile object at that time point or time period, intercepts the video data of that time point or time period from the video stream captured in the shooting area corresponding to the first positioning information, and returns the intercepted video data to the client.
In this embodiment, cameras are distributed in different areas, and the area range captured by each camera is recorded in a database. When the server receives a video acquisition request for a time point or time period, it can find the video stream captured by the camera covering the corresponding shooting area, according to the first positioning information of the mobile object (corresponding to that time point or period) combined with the camera shooting areas recorded in the database, and then intercept from that stream the video data corresponding to the time point or period, i.e. exactly the video data the user needs to watch, meeting the viewing requirement.
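The camera lookup and interception request can be sketched as follows. This is illustrative only: the camera table, the rectangular shooting areas, and the stream URLs are hypothetical, and the actual interception of video data from the stream is left to a media server outside the sketch.

```python
# Hypothetical records: each camera covers a rectangular shooting area.
CAMERAS = {
    "cam-A": {"area": (0, 0, 10, 10), "stream": "rtsp://example/camA"},
    "cam-B": {"area": (10, 0, 20, 10), "stream": "rtsp://example/camB"},
}

def camera_for_position(position):
    """Find the camera whose recorded shooting area contains the position."""
    x, y = position
    for cam_id, cam in CAMERAS.items():
        min_x, min_y, max_x, max_y = cam["area"]
        if min_x <= x <= max_x and min_y <= y <= max_y:
            return cam_id
    return None

def clip_request(position, start, end):
    """Build an interception request: which stream, and which time window.
    Performing the cut itself is delegated to the media backend."""
    cam_id = camera_for_position(position)
    if cam_id is None:
        raise LookupError("no camera covers this position")
    return {"stream": CAMERAS[cam_id]["stream"], "start": start, "end": end}
```

Given the first positioning information of the mobile object for the requested period, the server would call `clip_request` and hand the result to the component that cuts the stream and returns the data to the client.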
Further, if there are a plurality of moving objects, i.e. at least two, then to satisfy the requirement of viewing the time identification information of a particular target moving object, one moving object may optionally be selected as the target according to a user or system instruction, and the identifier of the target moving object is sent by the client. Correspondingly, before step 201, the method may further include receiving the identifier of the target mobile object sent by the client, the target mobile object being a mobile object determined from the plurality of mobile objects. Correspondingly, step 201 may specifically include obtaining the position record of the target mobile object according to its identifier (i.e. one of the mobile objects mentioned in step 202), recording the time points or time periods at which the position record meets the first condition, and generating the time identification information. In this way, a single target can be accurately tracked and viewed even when a plurality of moving objects exist, meeting certain business requirements.
Further, in order to satisfy more query requirements, the embodiment may further include receiving a first time sent by the client; according to the first time, obtaining positioning information corresponding to a third moving object at the first time; generating position prompt information of the third moving object according to the positioning information; and returning the position prompt information to the client.
The received first time is the indicated first time obtained by the client through the interactive interface of the first video. When the server receives the first time, it can obtain, from the position record of the mobile object, the position of the positioning tag corresponding to the mobile object at the first time, as the positioning information for that time. Position prompt information is then generated from the position record, the positioning information corresponding to the first time, map information, and the like, and returned to the client; the generated information may include the map position and/or moving track of the mobile object at the first time. The client may display this information on the interactive interface of the first video.
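Looking up the tag position in effect at the first time can be sketched as a search over the time-ordered position record. This is an illustrative sketch; the record format and the choice of "latest sample at or before the requested time" (natural when the tag reports at regular intervals) are assumptions, not stated in the patent.

```python
from bisect import bisect_right

def position_at(position_record, first_time):
    """Return the recorded tag position in effect at first_time.

    position_record: time-ordered list of (timestamp, position) samples.
    The latest sample at or before first_time is taken as the positioning
    information corresponding to that time.
    """
    timestamps = [ts for ts, _ in position_record]
    i = bisect_right(timestamps, first_time)   # first sample strictly after
    if i == 0:
        raise LookupError("no record at or before the requested time")
    return position_record[i - 1][1]
```

The samples before that index would likewise supply the historical positions needed to draw the moving track in the position prompt.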
In this way, compared with the prior art, this embodiment can display, on the interactive interface and according to an indication instruction, the position information of the mobile object corresponding to the indicated time. The user can promptly learn where the mobile object is located at a specific time and quickly locate the captured video of the specific position to be checked, improving monitoring efficiency and accuracy and the viewing experience.
Further, to explain how the position information of the mobile object is generated: optionally, generating the position prompt information of the third mobile object according to the positioning information may specifically include generating or selecting a thumbnail as the position prompt information, the thumbnail indicating the position of the third mobile object at the first time; or generating text as the position prompt information, the text indicating the position of the third moving object at the first time.
For example, according to the position record of the mobile object, the historical positioning-tag positions at the historical time points before the first time are obtained, and the current tag position is determined from the positioning information corresponding to the first time. On the map, the moving track and current map position of the mobile object are drawn from the historical and current tag positions, producing the thumbnail. Displaying the position information of the mobile object in this way lets the user grasp the position at the first time more intuitively, and improves the precision of the data display.
Further, as a specific implementation of the information display method, the embodiment provides an information display device applicable to a client side, and the device includes a display module 41.
The display module 41 may be configured to display a first time identifier on the interactive interface, where the first time identifier is used to indicate a time point when the position record of the mobile object meets the first condition or a time period when the position record meets the first condition.
In a specific application scenario, optionally, the first condition includes one or a combination of the following: the distance between a first moving object and a second moving object satisfies a third condition A; or, the relation of the first moving object to a first position satisfies a third condition B; or, the density of positioning tags in a first area satisfies a third condition C; or, the moving speed of the first moving object satisfies a third condition D; or, the relation between the position record of the moving object and time satisfies a third condition E.
In a specific application scenario, the display module 41 may be further configured to display a second time identifier on the interactive interface, where the second time identifier is used to indicate a time point when the position record of the mobile object meets the second condition or a time period when the position record meets the second condition; the second time stamp is distinct from the first time stamp.
In a specific application scenario, the display module 41 may be further configured to obtain a first time through the interactive interface; obtaining position prompt information of a third moving object according to the first time; displaying the position prompt information on the interactive interface; the position hint information indicates a position at which a third moving object is located at the first time; the position prompt information is used for generating position prompts.
In a specific application scenario, the display module 41 may be specifically configured to obtain the time indicated by the position of the mouse pointer while the play progress bar is dragged, as the first time; or to obtain the time indicated by the position of the mouse pointer while it hovers over the play progress bar, as the first time.
In a specific application scenario, the display module 41 may be further configured to display a thumbnail on the interactive interface, the thumbnail indicating the position of the third mobile object at the first time; or to display text on the interactive interface, the text indicating the position of the third mobile object at the first time.
In a specific application scenario, the display module 41 may be further configured to display the first time identification information and/or the second time identifier on the play progress bar of the interactive interface; the first condition means that the first moving object is located in a first area, and the second condition means that the second moving object is located in a second area. A condition configuration interface may also be displayed, through which the first condition or the second condition is configured, or the third moving object is selected;
the interactive interface is an interactive interface of a first video; the first time identification information corresponds to the first video; the first condition is that a first moving object is located within a first region; when the mouse hovers over the first time identifier, the interactive interface displays the prompt message corresponding to the first condition; when the mouse clicks the progress bar, the playing position of the first video jumps to the time corresponding to the clicked position;
and when the mouse clicks the first time identifier, the video content corresponding to the first time identifier in the first video is played.
It should be noted that other corresponding descriptions of the functional units related to the information display apparatus applicable to the user client side provided in this embodiment may refer to the corresponding descriptions in the information display method embodiment, and are not described herein again.
Further, as a specific implementation of the interaction information generation method, an embodiment of the present application provides an interaction information generation apparatus applicable to the server side; the apparatus includes a generation module 51 and a sending module 52.
The generating module 51 may be configured to record, according to the position of the mobile object, a time point meeting the first condition or a time period meeting the first condition, and generate first time identification information;
a sending module 52, configured to send the first time identification information to the client.
In a specific application scenario, optionally, the first condition includes one or a combination of the following: the distance between the first moving object and the second moving object satisfies a third condition A; or, the relationship of the first moving object to the first position satisfies a third condition B; or, the density of moving objects within the first area satisfies a third condition C; or, the moving speed of the first moving object satisfies a third condition D; or, the relation between the position record of the moving object and time satisfies a third condition E. In a specific application scenario, the apparatus further comprises a receiving module and an acquisition module;
the receiving module can be used for receiving the first time sent by the client;
the acquisition module is used for acquiring a positioning information time point corresponding to a third moving object in the first time according to the first time;
the acquisition module is also used for generating a position prompt information time point of the third moving object according to the positioning information;
the sending module 52 may be further configured to return the location hint information to the client.
In a specific application scenario, the acquisition module may be further configured to generate or select a thumbnail as the position prompt information according to the positioning information, the thumbnail indicating the position of the third mobile object at the first time; or to generate text as the position prompt information according to the positioning information, the text indicating the position of the third moving object at the first time.
It should be noted that other corresponding descriptions of the functional units related to the interaction information generation apparatus applicable to the server side provided in this embodiment may refer to the corresponding descriptions in the interaction information generation method, and are not described herein again.
Based on the methods shown above, the present application further provides a storage medium on which a computer program is stored; when executed by a processor, the program implements the information display method applicable to the client side. The present application further provides another storage medium on which a computer program is stored; when executed by a processor, the program implements the interaction information generation method applicable to the server side as shown in fig. 3.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method of the embodiments of the present application.
Based on the method and the virtual device embodiment described above, to achieve the above object, the present application further provides a client device, which may be specifically a personal computer, a tablet computer, a smart phone, a smart watch, a smart bracelet, or other network devices, where the client device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the above-described information display method applicable to the user client side.
Based on the method and the virtual device embodiment shown above, in order to achieve the above object, the embodiment of the present application further provides a server device, which may be specifically a server or other network devices. The apparatus includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the above-described illustrated interaction information generation method applicable to the server side.
Optionally, both entity devices may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and the like. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a USB interface, a card reader interface, and the like. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a WI-FI interface), and the like.
Those skilled in the art will appreciate that the physical device structure of the client device and the server device provided in the present embodiment is not limited to these two physical devices, and may include more or less components, or combine some components, or arrange different components.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the two physical devices described above, supporting the operation of the information processing program as well as other software and/or programs. The network communication module is used for realizing communication among components in the storage medium and communication with other hardware and software in the information processing entity device.
Based on the above, further, an embodiment of the present application further provides a positioning system, where the system includes a positioning tag, a micro base station, a server, and a monitoring client, where the server is the server device, and the monitoring client is the client device.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus a necessary general hardware platform, or by hardware. Compared with the prior art, the technical scheme of the embodiments can display time identification information on the interactive interface of the video, the time identification information indicating the time points or time periods at which the position record of the mobile object meets the first condition, so that the user can promptly learn those times and quickly locate the captured video of the specific position to be checked, improving monitoring efficiency and accuracy and the viewing experience.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or processes in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. An information display method, comprising:
generating first time identification information according to the position record and the video recording position of the mobile object;
displaying a first time identifier on an interactive interface, wherein the first time identifier is used for indicating a time point when the position record meets a first condition or a time period meeting the first condition;
the first time identifier corresponds to the first time identifier information.
2. The method of claim 1, wherein the first time indicator is displayed in a progress bar area or an area having a mapping relation with a progress bar.
3. An information display method, comprising:
displaying a first time identifier on the interactive interface, wherein the first time identifier is used for indicating a time point when the position record of the mobile object meets the first condition or a time period meeting the first condition.
4. The method of claim 3, wherein the first time indicator is displayed in a progress bar area or an area having a mapping relation with a progress bar.
5. An interactive information generating method, comprising:
recording a time point meeting a first condition or a time period meeting the first condition according to the position of the mobile object, and generating first time identification information;
and sending the first time identification information.
6. The method of claim 5, wherein the first condition comprises one or more of the following:
the distance between the first moving object and the second moving object meets a third condition A; or,
the relationship of the first moving object to the first position satisfies a third condition B; or,
the density of moving objects within the first region satisfies a third condition C; or,
the moving speed of the first moving object satisfies a third condition D; or,
the relation between the position record of the moving object and the time meets a third condition E; or,
the trajectory of the first moving object meets a third condition F.
7. The method of claim 5, further comprising:
receiving first time sent by a client;
according to the first time, acquiring positioning information corresponding to a third moving object at the first time;
generating position prompt information of the third moving object according to the positioning information;
and returning the position prompt information to the client.
8. A client device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, wherein the processor implements the method of any one of claims 3 to 4 when executing the program.
9. A server device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of any of claims 5 to 7 when executing the program.
10. A positioning system, comprising a positioning tag, a micro base station, a server, and a monitoring client, wherein the server is the apparatus of claim 9, and the monitoring client is the apparatus of claim 8.
CN202010188668.9A 2020-03-17 2020-03-17 Information display method, equipment and positioning system Pending CN113411654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010188668.9A CN113411654A (en) 2020-03-17 2020-03-17 Information display method, equipment and positioning system


Publications (1)

Publication Number Publication Date
CN113411654A true CN113411654A (en) 2021-09-17

Family

ID=77677195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010188668.9A Pending CN113411654A (en) 2020-03-17 2020-03-17 Information display method, equipment and positioning system

Country Status (1)

Country Link
CN (1) CN113411654A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101321271A * 2007-06-08 2008-12-10 Canon Inc. Information processing apparatus and information processing method
CN103165153A * 2011-12-14 2013-06-19 China Telecom Corp., Ltd. Method of playing video according to recording location trajectory and mobile video terminal
CN105844914A * 2016-04-25 2016-08-10 Shenzhen Shuangying Weiye Technology Co., Ltd. Road condition monitoring method and device
CN107734303A * 2017-10-30 2018-02-23 Beijing Xiaomi Mobile Software Co., Ltd. Video labeling method and device
CN110072150A * 2019-05-30 2019-07-30 Shanghai Slamtec Co., Ltd. Video playing method, apparatus, device and storage medium
CN110198487A * 2019-05-30 2019-09-03 Shanghai Slamtec Co., Ltd. Video playing method, apparatus, device and storage medium


Similar Documents

Publication Publication Date Title
KR102337507B1 (en) Synchronized video with in game telemetry
US20220237227A1 (en) Method and apparatus for video searching, terminal and storage medium
CN203224887U (en) Display control device
CN111882582B (en) Image tracking correlation method, system, device and medium
CN113194349B (en) Video playing method, comment device, equipment and storage medium
US11676389B2 (en) Forensic video exploitation and analysis tools
KR20160097870A (en) System and method for browsing summary image
CN111124567B (en) Operation recording method and device for target application
CN111833454B (en) Display method, device, equipment and computer readable storage medium
CN105635519A (en) Video processing method, device and system
US11334621B2 (en) Image search system, image search method and storage medium
CN112817790A (en) Method for simulating user behavior
CN111290931B (en) Method and device for visually displaying buried point data
CN113709542A (en) Method and system for playing interactive panoramic video
JP2024502516A (en) Data annotation methods, apparatus, systems, devices and storage media
CN113345108B (en) Augmented reality data display method and device, electronic equipment and storage medium
CN110248235A (en) Software teaching method, apparatus, terminal device and medium
KR102069963B1 (en) Method of specifying number of person in moving object measurement system and area to be measured
Deffeyes Mobile augmented reality in the data center
CN113411654A (en) Information display method, equipment and positioning system
CN112288889A (en) Indication information display method and device, computer equipment and storage medium
CN111984126A (en) Answer record generation method and device, electronic equipment and storage medium
CN113411653A (en) Information display method, equipment and positioning system
KR101925181B1 (en) Augmented/virtual reality based dynamic information processing method and system
CN114286160A (en) Video playing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination