CN109873992B - Information processing method and device - Google Patents
Abstract
The present disclosure provides an information processing method. The method comprises: obtaining positioning display content, where the positioning display content includes map content and a marker object of a positioning device; displaying the marker object in the map content based on the positioning display content; obtaining an input operation; and, in response to the input operation, obtaining an image captured in real time by a camera of the positioning device. The present disclosure also provides an information processing apparatus.
Description
Technical Field
The present disclosure relates to an information processing method and an information processing apparatus.
Background
More and more devices now come with a positioning function, which makes them much easier to find. For example, in the transportation field, the position of a ride-hailing vehicle can be obtained remotely by quickly locating the vehicle; in the security field, a lost device can be recovered by locating it. In the prior art, however, a located device is usually shown only as a marker on a map, and the user cannot obtain more specific information about the scene where the device is located. The user therefore knows only the device's approximate position; if the buildings and objects around the device are densely distributed, or many similar objects are present, it is difficult to find the located device quickly and accurately. For example, during peak ride-hailing hours in areas with many riders, several booked vehicles may arrive at the same spot at once. The user may then find it very hard to identify the booked vehicle, and if the booked vehicle is an unmanned autonomous car, the user cannot find it by phoning the driver.
Disclosure of Invention
A first aspect of the present disclosure provides an information processing method. The method comprises: obtaining positioning display content, where the positioning display content includes map content and a marker object of a positioning device; displaying the marker object in the map content based on the positioning display content; obtaining an input operation; and, in response to the input operation, obtaining an image captured in real time by a camera of the positioning device.
Optionally, in the positioning display content, the position of the marker object in the map content is dynamically updated with positioning parameters obtained by a satellite positioning system of the positioning device.
Optionally, the input operation is a zoom-in operation on the part of the map content displayed in the current display area, where that part of the map content includes the marker object.
Optionally, responding to the input operation comprises: when the magnification of the partial map content reaches a switching threshold, obtaining an image captured in real time by the camera of the positioning device.
Optionally, the method further comprises: obtaining access rights to the camera of the positioning device, so as to control the camera of the positioning device to capture images in real time.
Optionally, obtaining the access rights to the camera of the positioning device comprises: registering and logging in with the same identity authentication as the user account that controls the camera in the positioning device, so as to obtain rights to the camera of the positioning device; or sending a predetermined request to the cloud and obtaining, as fed back by the cloud, the access rights granted by the positioning device in response to the predetermined request.
A second aspect of the present disclosure provides an information processing method. The method comprises: sending a predetermined request to a cloud; obtaining feedback information from the cloud, where the feedback information includes access rights of the positioning device granted in response to the predetermined request; obtaining, based on the access rights, an image captured in real time by a camera of the positioning device; and displaying the captured image.
A third aspect of the present disclosure provides an information processing apparatus. The apparatus comprises a first obtaining module, a marker object display module, a second obtaining module, and a third obtaining module. The first obtaining module is configured to obtain positioning display content, where the positioning display content includes map content and a marker object of a positioning device. The marker object display module is configured to display the marker object in the map content based on the positioning display content. The second obtaining module is configured to obtain an input operation. The third obtaining module is configured to obtain, in response to the input operation, an image captured in real time by a camera of the positioning device.
Optionally, the input operation is a zoom-in operation on the part of the map content displayed in the current display area, where that part of the map content includes the marker object.
A fourth aspect of the present disclosure provides an information processing apparatus. The apparatus comprises a sending module, a fourth obtaining module, a fifth obtaining module, and an image display module. The sending module is configured to send a predetermined request to the cloud. The fourth obtaining module is configured to obtain feedback information from the cloud, where the feedback information includes access rights of the positioning device granted in response to the predetermined request. The fifth obtaining module is configured to obtain, based on the access rights, an image captured in real time by a camera of the positioning device. The image display module is configured to display the captured image.
A fifth aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, implement the information processing method provided by the first or second aspect of the present disclosure.
A sixth aspect of the present disclosure provides a computer program comprising computer-executable instructions that, when executed, implement the information processing method provided by the first or second aspect of the present disclosure.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically shows a system architecture of an information processing method and apparatus according to an embodiment of the present disclosure;
FIG. 2A schematically illustrates a flow diagram of an information processing method according to an embodiment of the present disclosure;
FIG. 2B schematically shows an example of a presentation screen of the information processing method illustrated in FIG. 2A;
FIG. 3 schematically shows a flow chart of an information processing method according to another embodiment of the present disclosure;
FIG. 4 schematically shows a flow chart of an information processing method according to yet another embodiment of the present disclosure;
fig. 5 schematically shows a block diagram of an information processing apparatus according to an embodiment of the present disclosure;
fig. 6 schematically shows a block diagram of an information processing apparatus according to another embodiment of the present disclosure; and
FIG. 7 schematically shows a block diagram of a computer system suitable for implementing an information processing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand it (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is likewise intended in the sense one having skill in the art would understand it (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
Embodiments of the present disclosure provide an information processing method and apparatus. The information processing method includes: obtaining positioning display content, where the positioning display content includes map content and a marker object of a positioning device; displaying the marker object in the map content based on the positioning display content; obtaining an input operation; and, in response to the input operation, obtaining an image captured in real time by a camera of the positioning device.
Embodiments of the present disclosure also provide another information processing method and a corresponding apparatus. This information processing method includes: sending a predetermined request to a cloud; obtaining feedback information from the cloud, where the feedback information includes access rights of the positioning device granted in response to the predetermined request; obtaining, based on the access rights, an image captured in real time by a camera of the positioning device; and displaying the captured image.
According to embodiments of the present disclosure, when the positioning device is located, an image captured in real time by the device's camera can also be obtained. This yields a view of the scene in at least one direction around the positioning device, providing the user with more accurate, specific, and detailed positioning information.
Fig. 1 schematically shows a system architecture of an information processing method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture may include a user terminal 10, a server 20, and a positioning device 30, where the positioning device 30 includes a camera 31. The server 20, the user terminal 10, and the positioning device 30 may communicate with one another via a network (not shown).
The user terminal 10 may interact with the server 20 through the network to receive and send messages. A client application may be installed in the user terminal 10; the user can operate within it, send user requests to the server 20, and receive response data from the server 20. In the application interface of the client application, the user can view the map content, the image captured in real time by the camera 31, and the like.
The user terminal 10 may be an electronic device having a display function, such as a smartphone, a tablet computer, a laptop computer, or a desktop computer. For example, the user terminal 10 of FIG. 1 may include a display 11, in which map content or images according to embodiments of the present disclosure may be shown. In some embodiments, the user terminal 10 may also be AR (Augmented Reality) glasses; map content or captured images according to embodiments of the present disclosure may then be presented to the user through the glasses. Thus, the user may look for the positioning device 30 based on the map content or captured image displayed by the AR glasses.
The server 20 may be a server providing various services: for example, it may receive a user request sent by the user terminal 10; in response, obtain an image captured by the camera 31 of the positioning device 30, or obtain control rights over the camera 31; and feed the captured image back to the user terminal 10.
The positioning device 30 may be any device having a positioning function. For example, the positioning device 30 may be an autonomous vehicle as shown in fig. 1, or a wristwatch, mobile phone, or the like in which positioning hardware is installed. According to embodiments of the present disclosure, the positioning device 30 also has a network module through which it can communicate with the server 20, for example to receive a control signal from the server 20 for the camera 31 and, according to the control signal, share the image captured by the camera 31 with the user terminal 10.
It should be noted that the information processing method provided by the embodiment of the present disclosure may be generally executed by the server 20. Accordingly, the information processing apparatus provided by the embodiment of the present disclosure may be generally provided in the server 20. The information processing method provided by the embodiment of the present disclosure may also be executed partly by the server 20 and partly by the user terminal 10. Accordingly, the information processing apparatus provided by the embodiment of the present disclosure may also be partially disposed in the server 20 and partially disposed in the user terminal 10. The information processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 20 and is capable of communicating with the user terminal 10, and/or the positioning device 30, and/or the server 20. Accordingly, the information processing apparatus provided in the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 20 and capable of communicating with the user terminal 10, and/or the positioning apparatus 30, and/or the server 20.
It should be understood that the number and variety of user terminals, servers, and positioning devices in fig. 1 are merely illustrative. There may be any number and variety of user terminals, servers, and positioning devices, as desired for an implementation.
Fig. 2A schematically illustrates a flow chart of an information processing method according to an embodiment of the present disclosure.
Fig. 2B schematically shows an example of a display screen of the information processing method illustrated in fig. 2A. Fig. 2B illustrates the process in which the screen on the display 11 of the user terminal 10 jumps from the screen of part (a) to the screen of part (b) based on a touch operation by the user.
With reference to fig. 2A and 2B, the information processing method includes operations S201 to S204.
In operation S201, a positioning presentation content is obtained, which includes map content and a mark object of the positioning apparatus 30.
In operation S202, a mark object is presented in the map content based on the positioning presentation content, and a presentation effect refers to a screen of part (a) in fig. 2B, for example. The screen of part (a) in fig. 2B includes map content 111, and a marker object 112 is displayed in the map content 111.
According to an embodiment of the present disclosure, in the positioning display content, the position of the marker object 112 in the map content 111 is dynamically updated with positioning parameters obtained by the satellite positioning system of the positioning device 30.
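As an illustrative sketch only (the class and function names below are hypothetical, not taken from this disclosure), dynamically updating the marker's position from a stream of satellite positioning parameters could look like this:

```python
from dataclasses import dataclass


@dataclass
class MapMarker:
    """Hypothetical marker object shown in the map content."""
    lat: float
    lon: float


def update_marker(marker: MapMarker, fix: tuple) -> None:
    """Move the marker to the latest (lat, lon) fix reported by the
    positioning device's satellite positioning system."""
    marker.lat, marker.lon = fix


# Simulated stream of positioning parameters from the device.
marker = MapMarker(0.0, 0.0)
for fix in [(39.9042, 116.4074), (39.9050, 116.4081)]:
    update_marker(marker, fix)
print(marker)  # the marker now holds the most recent fix
```

In a real client, each fix would arrive over the network and trigger a redraw of the map view.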
In operation S203, an input operation is obtained. According to an embodiment of the present disclosure, the input operation may be a zoom-in operation on the part of the map content 111 displayed in the current display area, where the marker object 112 is included in that part of the map content, and the target of the input operation is the marker object 112. Referring to part (a) of fig. 2B, the user enlarges the currently displayed map content 111 through a finger touch operation whose target is the marker object 112.
In operation S204, in response to the input operation, the image 121 captured in real time by the camera 31 of the positioning device 30 is obtained. After the user terminal 10 acquires the captured image 121, it may be shown on the display 11 of the user terminal 10; see, for example, the screen of part (b) in fig. 2B.
According to an embodiment of the present disclosure, responding to the input operation in operation S204 includes: when the magnification of the partial map content 111 reaches the switching threshold, obtaining the image 121 captured in real time by the camera 31 of the positioning device 30. The captured image 121 can provide a view of the scene in at least one direction around the positioning device 30, helping the user locate the positioning device 30 more quickly. For example, when a user hails a ride in a place where booked vehicles arrive very densely, the image captured by the vehicle's camera 31 (which may include, for example, the other vehicles and buildings around it) can be shown on the user terminal 10, helping the user quickly pinpoint the vehicle being sought. An autonomous vehicle is equipped with a comprehensive sensor system through which a scene map of the vehicle and its surrounding vehicles, buildings, and so on can be rapidly drawn (for example, based on images collected by the camera 31); the vehicle's own position may be highlighted in that map, which is then shared with the user.
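The map-to-camera switch described above can be sketched as a simple threshold check (the threshold value and all names are assumptions for illustration, not part of this disclosure):

```python
SWITCH_THRESHOLD = 18.0  # hypothetical magnification at which the view switches


def view_for_zoom(zoom_level: float) -> str:
    """Return which content the terminal should display: the map with the
    marker below the threshold, or the device camera's real-time image
    once the magnification reaches the switching threshold (operation S204)."""
    if zoom_level >= SWITCH_THRESHOLD:
        return "camera_feed"
    return "map"


print(view_for_zoom(12.0))  # map
print(view_for_zoom(18.5))  # camera_feed
```

A production client would hook such a check into the map widget's zoom callback and start streaming from the device only after access rights have been granted.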
According to embodiments of the present disclosure, not only can the geographic coordinates of the positioning device 30 be obtained from the position of the marker object 112 in the map content 111, but the image 121 captured in real time by the camera 31 of the positioning device 30 can also be obtained based on the user's input operation. Since the captured image 121 provides rich visual information about the surroundings of the positioning device 30, more detailed position information can be obtained, helping the user determine the position of the positioning device 30 more quickly.
Fig. 3 schematically shows a flow chart of an information processing method according to another embodiment of the present disclosure.
As shown in fig. 3, according to another embodiment of the present disclosure, the information processing method may further include operation S303 in addition to operations S201 to S204, wherein operation S303 is performed at least before operation S204.
In operation S303, access rights to the camera 31 of the positioning device 30 are obtained, so as to control the camera 31 of the positioning device 30 to capture images in real time. In one embodiment, the input operation obtained in operation S203 serves as the trigger condition for obtaining the access rights in operation S303. In one embodiment, operation S303 may be triggered by another specific user request. In one embodiment, operation S303 may be performed together with operation S201; that is, the access rights to the camera 31 of the positioning device 30 may be acquired while communicating with the positioning device 30.
According to embodiments of the present disclosure, the access rights obtained in operation S303 may be acquired by registering and logging in with the same identity authentication as the user account that controls the camera 31 in the positioning device 30, or by sending a predetermined request to the cloud and obtaining, as fed back by the cloud, the access rights granted by the positioning device 30 in response to the predetermined request.
For example, when the positioning device 30 is an iPhone, the user may log in with the iPhone's Apple ID on the user terminal 10 to acquire control rights over the iPhone's camera. According to embodiments of the present disclosure, once control rights over the camera 31 are acquired, the camera 31 can be turned on to capture images in real time; in some embodiments, even the shooting direction of the camera 31 can be controlled.
As another example, the positioning device 30 is a ride-hailing vehicle. The user can log in to the client application on the user terminal 10 with identity authentication, and through the client application acquire, under the authenticated account, control rights over the camera 31 of the booked vehicle. The user can then see the marker object 112 of the vehicle booked under his or her own account. When the user keeps enlarging the map content 111 on the user terminal 10 until the switching threshold is reached, the image 121 captured in real time by the vehicle's camera 31 is obtained. In other words, the user can only see the captured image 121 from the camera 31 of the vehicle in his or her own order.
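Both authorization paths described above — logging in with the same account that controls the device's camera, or receiving a grant from the cloud — can be sketched as follows (the account scheme and all names are illustrative assumptions):

```python
def obtain_camera_access(login_account: str,
                         device_owner_account: str,
                         cloud_grant: bool = False) -> bool:
    """Path 1: the user authenticated with the same account that controls
    the camera in the positioning device.
    Path 2: the cloud granted the access right in response to a
    predetermined request (passed in here as a pre-resolved flag)."""
    if login_account == device_owner_account:
        return True
    return cloud_grant


# A rider sees the camera feed only for the vehicle booked on their account.
print(obtain_camera_access("rider-001", "rider-001"))  # True
print(obtain_camera_access("rider-002", "rider-001"))  # False
print(obtain_camera_access("rider-002", "rider-001", cloud_grant=True))  # True
```

In practice the check would be enforced server-side, tied to the rider's active order, rather than decided on the terminal.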
FIG. 4 schematically shows a flow chart of an information processing method according to still another embodiment of the present disclosure.
As shown in fig. 4, the information processing method may include operations S401 to S404.
In operation S401, a predetermined request is sent to the cloud.
In operation S402, feedback information of the cloud is obtained, wherein the feedback information includes the access right of the positioning device 30 obtained in response to the predetermined request.
In operation S403, a captured image 121 captured in real time by the camera 31 of the positioning device 30 is obtained based on the access right.
In operation S404, the captured image 121 is presented.
According to embodiments of the present disclosure, control rights over the camera 31 of the positioning device 30 can be acquired through the cloud, and the image 121 captured in real time by the camera 31 can then be obtained. The captured image 121 provides more specific information about the environment around the positioning device 30, helping the user determine its position more quickly.
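Operations S401 to S404 can be sketched end to end with the cloud and device stubbed out (everything below is illustrative; this disclosure does not specify the actual protocol or data formats):

```python
from typing import Optional


class FakeCloud:
    """Stand-in for the cloud service that resolves access rights."""

    def handle(self, request: str) -> dict:
        granted = request == "predetermined_request"
        return {"access_granted": granted}


class FakeDeviceCamera:
    """Stand-in for the camera 31 of the positioning device."""

    def capture(self) -> bytes:
        return b"\xff\xd8frame"  # placeholder for a real-time frame


def locate_with_camera(cloud: FakeCloud,
                       camera: FakeDeviceCamera) -> Optional[bytes]:
    feedback = cloud.handle("predetermined_request")  # S401 request; S402 feedback
    if not feedback["access_granted"]:
        return None
    frame = camera.capture()                          # S403: real-time captured image
    return frame                                      # S404: caller displays the frame


frame = locate_with_camera(FakeCloud(), FakeDeviceCamera())
print(frame is not None)  # True
```

The key design point is that the terminal never talks to the camera directly until the cloud's feedback carries the access right.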
Fig. 5 schematically shows a block diagram of an information processing apparatus 500 according to an embodiment of the present disclosure.
As shown in fig. 5, the information processing apparatus 500 may include a first obtaining module 510, a marker object display module 520, a second obtaining module 530, and a third obtaining module 540. The information processing apparatus 500 may be used to implement the information processing method described with reference to fig. 2A to fig. 3.
The first obtaining module 510 is configured to obtain the positioning display content, which includes the map content 111 and the marker object 112 of the positioning device 30.
The marker object display module 520 is configured to display the marker object 112 in the map content 111 based on the positioning display content.
The second obtaining module 530 is configured to obtain an input operation. According to an embodiment of the present disclosure, the input operation is a zoom-in operation on the part of the map content 111 displayed in the current display area, where the marker object 112 is included in that part of the map content.
The third obtaining module 540 is configured to obtain, in response to the input operation, the image 121 captured in real time by the camera 31 of the positioning device 30.
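The four modules above can be wired together in a single class, roughly as follows (a structural sketch only; the module names are paraphrased and the data shapes are invented for illustration):

```python
class InformationProcessingApparatus:
    """Illustrative composition of the four modules of apparatus 500."""

    def __init__(self, content_source, camera):
        self.content_source = content_source  # yields positioning display content
        self.camera = camera                  # camera of the positioning device

    def obtain_content(self) -> dict:                # first obtaining module 510
        return self.content_source()

    def display_marker(self, content: dict) -> str:  # marker display module 520
        return f"marker at {content['marker']} on map {content['map']}"

    def obtain_input(self, event: dict) -> bool:     # second obtaining module 530
        return event.get("type") == "zoom_in"

    def obtain_image(self, zoomed_in: bool):         # third obtaining module 540
        return self.camera() if zoomed_in else None


apparatus = InformationProcessingApparatus(
    lambda: {"map": "city-block", "marker": (39.9, 116.4)},
    lambda: b"frame",
)
content = apparatus.obtain_content()
print(apparatus.display_marker(content))
print(apparatus.obtain_image(apparatus.obtain_input({"type": "zoom_in"})))
```

Splitting the flow into these four small units mirrors the patent's module boundaries: content acquisition, display, input handling, and image retrieval remain independently replaceable.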
Fig. 6 schematically shows a block diagram of an information processing apparatus 600 according to another embodiment of the present disclosure.
As shown in fig. 6, the information processing apparatus 600 may include a transmitting module 610, a fourth obtaining module 620, a fifth obtaining module 630, and an image presentation module 640. The information processing apparatus 600 may be used to implement the information processing method described with reference to fig. 4.
Specifically, the sending module 610 is configured to send a predetermined request to the cloud.
The fourth obtaining module 620 is configured to obtain feedback information of the cloud, where the feedback information includes access rights of the positioning device 30 obtained in response to the predetermined request.
The fifth obtaining module 630 is configured to obtain the captured image 121 captured by the camera 31 of the positioning apparatus 30 in real time based on the access right.
The image display module 640 is used for displaying the captured image 121.
Any number of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of them may also be split into a plurality of modules for implementation. Any one or more of these modules, sub-modules, units, and sub-units may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on chip, a system on substrate, a system on package, or an Application Specific Integrated Circuit (ASIC); by hardware or firmware in any other reasonable manner of integrating or packaging a circuit; or by any one of, or a suitable combination of, software, hardware, and firmware implementations. Alternatively, one or more of them may be at least partially implemented as a computer program module which, when executed, performs the corresponding function.
For example, any plurality of the first obtaining module 510, the marker object display module 520, the second obtaining module 530, the third obtaining module 540, the sending module 610, the fourth obtaining module 620, the fifth obtaining module 630, and the image display module 640 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of these modules may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on chip, a system on substrate, a system on package, or an Application Specific Integrated Circuit (ASIC); by hardware or firmware in any other reasonable manner of integrating or packaging a circuit; or by any one of, or a suitable combination of, software, hardware, and firmware implementations. Alternatively, at least one of these modules may be at least partially implemented as a computer program module which, when executed, performs the corresponding function.
FIG. 7 schematically shows a block diagram of a computer system 700 suitable for implementing an information processing method according to an embodiment of the present disclosure. The computer system 700 shown in fig. 7 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present disclosure.
As shown in fig. 7, computer system 700 includes a processor 710, a computer-readable storage medium 720, a display 730, and a signal receiver 740. The computer system 700 may perform an information processing method according to an embodiment of the present disclosure.
In particular, processor 710 may comprise, for example, a general purpose microprocessor, an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 710 may also include on-board memory for caching purposes. Processor 710 may be a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
Computer-readable storage medium 720, for example, may be a non-volatile computer-readable storage medium, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and so on.
The computer-readable storage medium 720 may include a computer program 721, which computer program 721 may include code/computer-executable instructions that, when executed by the processor 710, cause the processor 710 to perform a method according to an embodiment of the disclosure, or any variation thereof.
The computer program 721 may comprise computer program code organized as computer program modules. For example, the code in the computer program 721 may include one or more program modules, such as modules 721A, 721B, and so on. It should be noted that the division and number of modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation, so that when these program modules are executed by the processor 710, the processor 710 can perform the method according to the embodiment of the present disclosure or any variation thereof.
According to an embodiment of the disclosure, the processor 710 may interact with the display 730 and the signal receiver 740 to perform a method according to an embodiment of the disclosure or any variant thereof. The display 730 may be used to show the captured image 121 or the map content 111, for example. The signal receiver 740 may be used to receive external signals, for example.
According to an embodiment of the present disclosure, at least one of the first obtaining module 510, the marked object presentation module 520, the second obtaining module 530, the third obtaining module 540, the sending module 610, the fourth obtaining module 620, the fifth obtaining module 630, and the image presentation module 640 may be implemented as a computer program module described with reference to FIG. 7, which, when executed by the processor 710, may implement the corresponding operations described above.
In the embodiments provided by the present disclosure, a user may operate on (e.g., click) any marked object 112 that represents a movable positioning device 30 (e.g., an automobile or other positioning device) in a map, to directly obtain a captured image 121 captured by the camera 31 of the positioning device 30. The position of the positioning device 30 can thereby be determined more accurately, making it convenient for the user to find the positioning device 30 in the real scene. One application scenario of the embodiments provided by the present application is an online ride-hailing vehicle: when the map is zoomed in to a switching threshold, the display automatically switches to the captured image 121 captured by the camera 31 of the positioning device 30. In general, the user can locate the positioning device 30 on the planar map before zooming in to the switching threshold; if the user does zoom in to the switching threshold, this indicates that the previously zoomed planar map can no longer meet the user's need to locate or find the positioning device 30. At this point, the display naturally switches to the captured image 121 captured by the camera 31 of the positioning device 30, and the accurate position of the positioning device 30 can be further determined from the image of the real scene, building on the user's impression of the previously enlarged plan map. Of course, the ride-hailing platform feeds back to the registered user (i.e., the user who booked the vehicle) the information of the vehicle that responded to the order, so that the registered user can access the camera of the vehicle that accepted the order (which may, of course, be an unmanned vehicle) and thereby obtain the images collected by that camera. The registered user may obtain and view the captured image before boarding the vehicle, or may continue to obtain and display it after boarding.
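The zoom-to-threshold switching behavior described above can be sketched as follows. This is an illustrative sketch only: the names `PositioningView`, `SWITCH_THRESHOLD`, `on_zoom_in`, and `fetch_camera_frame` are hypothetical and do not appear in the patent.

```python
SWITCH_THRESHOLD = 18  # assumed zoom level at which the plan map gives way to the live feed


class PositioningView:
    """Shows the marker on a plan map until the user zooms past the switching threshold."""

    def __init__(self, fetch_camera_frame):
        self.zoom = 10
        self.showing_camera = False
        # Callable that returns a captured image from the positioning device's camera.
        self.fetch_camera_frame = fetch_camera_frame

    def on_zoom_in(self, new_zoom):
        self.zoom = new_zoom
        if self.zoom >= SWITCH_THRESHOLD and not self.showing_camera:
            # Magnification reached the switching threshold: swap the enlarged
            # plan map for the image captured by the positioning device's camera.
            self.showing_camera = True
            return self.fetch_camera_frame()
        return None  # below the threshold: keep showing the zoomed plan map
```

Under this sketch, `PositioningView(lambda: "frame-001").on_zoom_in(19)` would return the captured frame, while a zoom below the threshold leaves the map displayed.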
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure can be combined and/or sub-combined in various ways, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, such combinations and/or sub-combinations may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.
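The cloud-mediated access flow described in the embodiments above (send a predetermined request to the cloud, receive the access right in the feedback, then fetch the camera's real-time image) could be sketched as follows. The names `CloudStub`, `request_access`, and `get_live_image` are invented for illustration and are not part of the patent.

```python
class CloudStub:
    """Stands in for the cloud end that grants camera-access rights per device."""

    def __init__(self, grants):
        self.grants = grants  # mapping of device id -> access token

    def request_access(self, device_id):
        # Respond to the predetermined request with the device's access right,
        # or None if no right is granted for that device.
        return self.grants.get(device_id)


def get_live_image(cloud, device_id, camera):
    """Send the predetermined request, then fetch a frame only if access is granted."""
    token = cloud.request_access(device_id)
    if token is None:
        return None  # no access right: do not touch the camera
    # `camera` is a callable that captures an image in real time given the token.
    return camera(token)
```

The point of the sketch is the ordering: the captured image is obtained only after the cloud has fed back the access right, matching the request/feedback/acquire sequence of the described embodiment.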
Claims (7)
1. An information processing method, wherein the method comprises:
obtaining positioning display content, wherein the positioning display content comprises map content and a mark object of a positioning device;
displaying the marker object in the map content based on the positioning display content;
obtaining an input operation, wherein the input operation is a zoom-in operation on displayed partial map content in a current display area, and the partial map content comprises the mark object; and
responding to the input operation, acquiring a collected image collected by a camera of the positioning device in real time, wherein the collected image comprises a scene graph of at least one position around the positioning device;
wherein, in the positioning display content, the position of the mark object in the map content is dynamically updated along with the positioning parameters obtained by the satellite positioning system of the positioning device.
2. The information processing method according to claim 1, wherein the responding to the input operation includes:
and when the magnification ratio of the partial map content reaches a switching threshold value, acquiring an acquired image acquired by a camera of the positioning device in real time.
3. The information processing method according to claim 2, wherein the method further comprises:
and obtaining the access authority of the camera of the positioning device so as to control the camera of the positioning device to acquire images in real time.
4. The information processing method of claim 3, wherein the obtaining access rights of the camera of the positioning device comprises:
registering and logging in based on the same identity authentication as the user account that controls the camera in the positioning device, so as to obtain the authority of the camera of the positioning device; or
sending a predetermined request to a cloud end, and obtaining the access authority of the positioning device, fed back by the cloud end in response to the predetermined request.
5. The information processing method according to any one of claims 1 to 4, wherein the method further comprises:
sending a predetermined request to a cloud;
obtaining feedback information of the cloud, wherein the feedback information comprises access authority of the positioning device obtained in response to the predetermined request;
acquiring a collected image collected by a camera of the positioning device in real time based on the access right, wherein the collected image comprises a scene graph of at least one position around the positioning device; and
and displaying the collected image.
6. An information processing apparatus, wherein the apparatus comprises:
a first obtaining module, configured to obtain positioning display content, wherein the positioning display content comprises map content and a mark object of a positioning device;
a tagged object presentation module for presenting the tagged object in the map content based on the positioning presentation content; in the positioning display content, dynamically updating the position of the mark object in the map content along with the positioning parameters obtained by a satellite positioning system of the positioning device;
a second obtaining module, configured to obtain an input operation, where the input operation is an operation of zooming in a part of the map content displayed in a current display area, where the part of the map content includes the mark object;
a third obtaining module, configured to respond to the input operation and obtain a collected image collected by a camera of the positioning device in real time, wherein the collected image comprises a scene graph of at least one position around the positioning device.
7. The information processing apparatus according to claim 6, wherein the apparatus further comprises:
a sending module, configured to send a predetermined request to the cloud;
a fourth obtaining module, configured to obtain feedback information of the cloud, where the feedback information includes an access right of the positioning apparatus obtained in response to the predetermined request;
a fifth obtaining module, configured to obtain, based on the access right, a captured image that is captured by a camera of the positioning device in real time, where the captured image includes a scene graph of at least one position around the positioning device; and
an image display module, configured to display the collected image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910235331.6A CN109873992B (en) | 2019-03-26 | 2019-03-26 | Information processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109873992A CN109873992A (en) | 2019-06-11 |
CN109873992B true CN109873992B (en) | 2021-12-24 |
Family
ID=66921365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910235331.6A Active CN109873992B (en) | 2019-03-26 | 2019-03-26 | Information processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109873992B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105306899A (en) * | 2015-10-27 | 2016-02-03 | 太原市公安局 | Monitoring video processing method and device |
CN107357829A (en) * | 2017-06-19 | 2017-11-17 | 北京小米移动软件有限公司 | Scaling processing method and processing device |
RU2667035C1 (en) * | 2017-05-17 | 2018-09-13 | Общество С Ограниченной Ответственностью "Авп Технология" | System of remote control and informing the machinist about the railway-crossing occupancy |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103249142B (en) * | 2013-04-26 | 2016-08-24 | 东莞宇龙通信科技有限公司 | Positioning method, system and mobile terminal |
CN104570801B (en) * | 2014-12-30 | 2019-06-07 | 深圳市科漫达智能管理科技有限公司 | A kind of apparatus control method and device |
CN105072572A (en) * | 2015-06-30 | 2015-11-18 | 上海卓易科技股份有限公司 | Position displaying method and electronic equipment |
CN105141837A (en) * | 2015-08-11 | 2015-12-09 | 广东欧珀移动通信有限公司 | Loss tracking method and user terminal |
US9565521B1 (en) * | 2015-08-14 | 2017-02-07 | Samsung Electronics Co., Ltd. | Automatic semantic labeling based on activity recognition |
CN105120518B (en) * | 2015-09-07 | 2017-08-08 | 广东欧珀移动通信有限公司 | A kind of indoor orientation method and user terminal |
CN105204054A (en) * | 2015-10-16 | 2015-12-30 | 北京机械设备研究所 | Cluster positioning and commanding system |
CN105809695A (en) * | 2016-03-11 | 2016-07-27 | 深圳还是威健康科技有限公司 | Terminal searching method and device based on wearable device |
CN106027108B (en) * | 2016-08-16 | 2018-05-18 | 广东欧珀移动通信有限公司 | A kind of indoor orientation method, device and wearable device and mobile terminal |
CN106679665B (en) * | 2016-12-13 | 2023-03-10 | 腾讯科技(深圳)有限公司 | Route planning method and device |
CN107808117A (en) * | 2017-09-29 | 2018-03-16 | 上海工程技术大学 | A kind of shared Vehicle positioning system and its localization method based on cloud computing |
CN107818610A (en) * | 2017-11-22 | 2018-03-20 | 深圳市沃特沃德股份有限公司 | The method and device of remotely controlling vehicle mounted systematic collaboration drive recorder shooting |
CN108366338B (en) * | 2018-01-31 | 2021-07-16 | 联想(北京)有限公司 | Method and device for searching electronic equipment |
CN109040960A (en) * | 2018-08-27 | 2018-12-18 | 优视科技新加坡有限公司 | A kind of method and apparatus for realizing location-based service |
Non-Patent Citations (1)
Title |
---|
On the Use of 3D Data in Navigation Electronic Maps; Liu Yanqin; Digital Technology and Application; 2011-08-15; pp. 178-179 *
Also Published As
Publication number | Publication date |
---|---|
CN109873992A (en) | 2019-06-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||