CN114726999A - Image acquisition method, image acquisition device and storage medium

Info

Publication number: CN114726999A (application CN202110005582.2A; granted as CN114726999B)
Authority: CN (China)
Prior art keywords: navigation, image, camera application, information, map
Inventors: 李鑫 (Li Xin), 宋宇明 (Song Yuming)
Applicant and current assignee: Beijing Xiaomi Mobile Software Co., Ltd.
Original language: Chinese (zh)
Priority: CN202110005582.2A
Legal status: Granted; Active

Classifications

    • H: Electricity
        • H04: Electric communication technique
            • H04N: Pictorial communication, e.g. television
                • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
                    • H04N 23/60: Control of cameras or camera modules
                        • H04N 23/62: Control of parameters via user interfaces
                        • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
                            • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
                                • H04N 23/632: GUIs for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • G: Physics
        • G01: Measuring; testing
            • G01C: Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry
                • G01C 21/00: Navigation; navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
                    • G01C 21/26: Navigation specially adapted for a road network
                        • G01C 21/34: Route searching; route guidance
                            • G01C 21/36: Input/output arrangements for on-board computers
                                • G01C 21/3626: Details of the output of route guidance instructions
                                    • G01C 21/3647: Guidance involving output of stored or live camera images or video streams

Abstract

The present disclosure relates to an image acquisition method, an image acquisition apparatus, and a storage medium. The method is applied to a terminal whose camera application has an image acquisition function, and comprises: when a navigation instruction triggering the camera application to execute a navigation function is received, acquiring a preview image captured by the camera application in real time, the navigation instruction comprising a navigation target address; when a preset landmark object exists in the preview image, calling a map navigation engine and acquiring navigation map information determined by the map navigation engine based on the preset landmark object and the target address; and displaying navigation guidance information on a preview interface of the camera application based on the navigation map information and the preview image. By combining the preview image acquired in real time with the called map navigation engine, the method enriches the functions the camera application can perform with the acquired image and thereby enhances its practical value.

Description

Image acquisition method, image acquisition device and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to an image acquisition method, an image acquisition apparatus, and a storage medium.
Background
With the development of science and technology, the personalized functions and applications of terminals bring increasing convenience to users. The camera application of a terminal is used to take pictures and record videos. In the related art, however, every function the camera application can execute is related to shooting, so its functionality is narrow and its practical value limited.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image capturing method, an image capturing apparatus, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided an image acquisition method applied to a terminal, where the terminal includes a camera application having an image acquisition function. The image acquisition method includes: acquiring a preview image captured by the camera application in real time when a navigation instruction triggering the camera application to execute a navigation function is received, the navigation instruction including a navigation target address; when a preset landmark object exists in the preview image, calling a map navigation engine and acquiring navigation map information determined by the map navigation engine based on the preset landmark object and the target address; and displaying navigation guidance information on a preview interface of the camera application based on the navigation map information and the preview image.
In an embodiment, before the map navigation engine is called when the preset landmark object exists in the preview image, the image acquisition method further includes: performing scene recognition on the preview image, and determining a preset landmark object included in the current scene based on the scene recognition result.
In another embodiment, after the map navigation engine is called when the preset landmark object exists in the preview image, the image acquisition method further includes: acquiring landmark object information determined by the map navigation engine by matching a panoramic map of the current location against the preset landmark object; and, if a trigger instruction for the associated information of a preset landmark object is detected, requesting the associated information of that landmark object from a cloud server based on the landmark object information, and displaying the associated information acquired from the cloud server.
In yet another embodiment, the image acquisition method further includes: determining a virtual navigation avatar and a virtual navigation indicator matching the navigation guidance information, and displaying the virtual navigation avatar and the virtual navigation indicator in the navigation guidance information shown on the preview interface of the camera application.
In yet another embodiment, the image acquisition method further includes: if a scene recognition result obtained by performing scene recognition on the preview image indicates that the current location and the target address are different locations within the same building, updating the displayed navigation guidance information, where the updated navigation guidance information includes a navigation route between the entrance/exit location of the building and the target address.
According to a second aspect of the embodiments of the present disclosure, there is provided an image acquisition apparatus applied to a terminal, where the terminal includes a camera application having an image acquisition function. The image acquisition apparatus includes: an acquisition unit configured to acquire a preview image captured by the camera application in real time when a navigation instruction triggering the camera application to execute a navigation function is received, the navigation instruction including a navigation target address; a calling unit configured to call a map navigation engine when a preset landmark object exists in the preview image and to acquire navigation map information determined by the map navigation engine based on the preset landmark object and the target address; and a display unit configured to display navigation guidance information on a preview interface of the camera application based on the navigation map information and the preview image.
In one embodiment, the image acquisition apparatus further includes: a recognition unit configured to perform scene recognition on the preview image and determine a preset landmark object included in the current scene based on the scene recognition result.
In another embodiment, the acquisition unit is further configured to: acquire landmark object information determined by the map navigation engine by matching a panoramic map of the current location against the preset landmark object; and, if a trigger instruction for the associated information of a preset landmark object is detected, request the associated information of that landmark object from a cloud server based on the landmark object information and display the associated information acquired from the cloud server.
In yet another embodiment, the display unit is further configured to: determine a virtual navigation avatar and a virtual navigation indicator matching the navigation guidance information, and display the virtual navigation avatar and the virtual navigation indicator in the navigation guidance information shown on the preview interface of the camera application.
In yet another embodiment, the image acquisition apparatus further includes: an updating unit configured to update the displayed navigation guidance information if a scene recognition result obtained by performing scene recognition on the preview image captured by the camera application in real time indicates that the current location and the target address are different locations within the same building, where the updated navigation guidance information includes a navigation route between the entrance/exit location of the building and the target address.
According to a third aspect of the embodiments of the present disclosure, there is provided an image acquisition apparatus including: a memory configured to store instructions; and a processor configured to invoke the instructions stored in the memory to execute any one of the image acquisition methods described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by a processor, perform any one of the image acquisition methods described above.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: based on the received navigation instruction, the map navigation engine is called while the camera application captures the preview image in real time, so the preview interface of the camera application can display both the preview image and navigation guidance information related to the target address. This enriches the functions the camera application can execute, offers the user multiple function choices, and enhances the practical value of the camera application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating an image acquisition method according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating another method of image acquisition according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating yet another method of image acquisition according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating yet another method of image acquisition according to an exemplary embodiment.
FIG. 5 is a flow chart illustrating yet another method of image acquisition according to an exemplary embodiment.
FIG. 6 is a flow chart illustrating yet another method of image acquisition according to an exemplary embodiment.
FIG. 7 is a flow chart illustrating yet another method of image acquisition according to an exemplary embodiment.
FIG. 8 illustrates a navigation interaction diagram in accordance with an exemplary embodiment.
FIG. 9 is a block diagram illustrating an image acquisition apparatus according to an exemplary embodiment.
FIG. 10 is a block diagram illustrating an image acquisition apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
In the related art, when a user uses an application having an image acquisition function, only functions related to image capture, such as taking a picture or recording a video, can be performed. Because the executable functions are so narrow, the practical value of the application is limited.
In view of this, the present disclosure provides an image acquisition method that enables the camera application to capture a preview image in real time and call a map navigation engine in response to a user-triggered navigation instruction, so that the preview interface of the camera application can display navigation guidance information related to a target address. When using the camera application, the user can execute the shooting function as well as the navigation function. Providing several different kinds of executable functions enriches the function choices of the camera application and enhances its practical value.
The image acquisition method provided in the present disclosure is applied to a terminal. In one example, the terminal may be a mobile terminal such as a cell phone, a tablet, or an iPod. In another example, the structure of the terminal may be that of a dual-screen terminal, a folding-screen terminal, a full-screen terminal, and so on.
Fig. 1 is a flowchart illustrating an image acquisition method according to an exemplary embodiment. As shown in Fig. 1, the image acquisition method is used in a terminal and includes the following steps S11 through S13.
In step S11, in the case of receiving a navigation instruction that triggers the camera application to execute a navigation function, a preview image captured in real time by the camera application is acquired.
In the embodiment of the present disclosure, the camera application starts executing the navigation function based on a navigation instruction triggered by the user. The navigation instruction includes the target address to be navigated to, that is, the address the user wants to find or reach. According to the received user-triggered navigation instruction, the camera application can then display navigation guidance information related to the target address contained in that instruction.
After receiving the navigation instruction, the camera application acquires the preview image it captures in real time, so that when the map navigation engine is called, the current position of the terminal can be located quickly and the navigation map information can be determined quickly based on the captured preview image.
In step S12, when a preset landmark object exists in the preview image, the map navigation engine is called, and navigation map information determined by the map navigation engine based on the preset landmark object and the target address is acquired.
In the embodiment of the present disclosure, the preset landmark objects around the current position of the terminal can be determined from the preview image the terminal captures in real time. A preset landmark object may be a road, a landmark building, or the like. When a preset landmark object exists in the preview image, the map navigation engine can be called through asynchronous loading to execute the navigation function of the camera application. Through the map navigation engine, the exact position of the terminal on the map can be determined based on the preset landmark object in the preview image, and, combined with the target address in the navigation instruction, navigation map information from the current position to the target address can be planned. The navigation map information characterizes a navigation route from the current position to the target address.
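For concreteness, the following is a minimal Kotlin sketch of how this step might be wired on an Android terminal. MapNavigationEngine, Landmark, and the other types are hypothetical stand-ins, since the patent names no concrete SDK; the asynchronous loading it describes is modeled here with Kotlin coroutines.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Hypothetical wrappers standing in for the map navigation SDK and the
// scene-recognition result; the patent does not name a concrete vendor API.
class MapNavigationEngine {
    fun locate(landmark: Landmark): Location = TODO("SDK visual positioning")
    fun planRoute(from: Location, to: String): NavigationMapInfo = TODO("SDK route planning")
}
data class Landmark(val label: String)
data class Location(val lat: Double, val lng: Double)
data class NavigationMapInfo(val routePolyline: List<Location>)

class NavigationController(private val scope: CoroutineScope) {
    // Loaded lazily so the engine does not slow down normal camera start-up.
    private val engine: MapNavigationEngine by lazy { MapNavigationEngine() }

    // Step S12: once a preset landmark object is found in the preview image,
    // load the engine off the main thread and request a route to the target address.
    fun onLandmarkDetected(landmark: Landmark, targetAddress: String) {
        scope.launch(Dispatchers.Default) {
            val current = engine.locate(landmark)            // fix the current position
            val mapInfo = engine.planRoute(current, targetAddress)
            withContext(Dispatchers.Main) { render(mapInfo) } // step S13 runs on the UI thread
        }
    }

    private fun render(info: NavigationMapInfo) { /* overlay onto the preview, see step S13 */ }
}
```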
In step S13, navigation guidance information is displayed on a preview interface of the camera application based on the navigation map information and the preview image.
In the embodiment of the present disclosure, the navigation map information is combined with the preview image to obtain the navigation guidance information, which is displayed through the preview interface of the camera application. When navigating with the displayed guidance, the user can quickly and intuitively judge the surrounding road conditions and road information from guidance that is fused with the real scene, which improves the user experience. In one example, the navigation guidance information may be navigation map information containing a target route direction that guides the user to the target address. A callback for changes in the navigation route direction can be registered in the map navigation engine in advance to monitor the terminal's moving direction in time, so that the user can be guided in real time according to their moving direction during navigation and kept from straying off the route.
Through this embodiment, upon receiving a user-triggered navigation instruction, the camera application can execute the navigation function using the preview image captured in real time and the called map navigation engine, so the preview interface can display both the real-time preview image and navigation guidance information, providing a navigation service for the user. The functions the camera application can execute are enriched, and its practicality is therefore enhanced.
In an embodiment, the preview image captured in real time may be fused with the navigation map information based on Augmented Reality (AR) technology, so that the user can learn the real road conditions from the navigation guidance information displayed on the preview interface of the camera application, improving the authenticity and accuracy of navigation.
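As one illustration of such fusion, the sketch below draws guidance on top of the viewfinder with a plain Android overlay view. It assumes the bearing and instruction text come from the navigation engine, and it stands in for whatever AR rendering pipeline an implementation would actually use.

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Path
import android.view.View

// Draws a direction arrow and an instruction string above the camera preview.
// A real implementation would take its pose from an AR framework; here the
// bearing is simply supplied by the navigation engine's direction callback.
class GuidanceOverlayView(context: Context) : View(context) {
    var bearingDegrees = 0f   // direction of the next route segment
    var instruction = ""      // e.g. "Turn left in 50 m"

    private val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        color = Color.WHITE
        textSize = 48f
        strokeWidth = 8f
        style = Paint.Style.FILL_AND_STROKE
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        canvas.save()
        canvas.rotate(bearingDegrees, width / 2f, height / 2f)
        // A simple arrow pointing "up" before the rotation is applied.
        val arrow = Path().apply {
            moveTo(width / 2f, height / 2f - 120f)
            lineTo(width / 2f - 60f, height / 2f + 40f)
            lineTo(width / 2f + 60f, height / 2f + 40f)
            close()
        }
        canvas.drawPath(arrow, paint)
        canvas.restore()
        canvas.drawText(instruction, 40f, height - 80f, paint)
    }
}
```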
In another embodiment, to enable the camera application to execute the navigation function, a navigation mode may be added to the camera application and a map navigation Software Development Kit (SDK) integrated into it, enriching the functions of the camera application and enhancing its practicality. When the user needs to navigate with the terminal, this also offers more choices of applications to use. In one example, the map navigation engine may be equivalent to the map navigation SDK.
In yet another embodiment, the navigation instruction for executing the navigation function may be delivered to the camera application by voice recognition, by text-box input, or by scanning a two-dimensional code. In one example, the camera application may also integrate a voice recognition SDK so the target address can be recognized from a navigation instruction triggered by the user's voice. In another example, when the navigation instruction is triggered through text-box input, the target address is obtained directly if the user types it manually; if the user pastes the target address into the text box from the clipboard, the target address is obtained by parsing the clipboard.
In yet another embodiment, the user may trigger the navigation instruction for the camera application to execute the navigation function through a navigation mode entry. In one example, the navigation mode entry may be placed inside the camera application, that is, while the camera application is running, the currently displayed application interface provides the entry that triggers the navigation instruction. In another example, the navigation mode entry may be placed in a search box on the terminal desktop: when a target address is searched through that search box, the terminal jumps to and starts the camera application and triggers the navigation instruction for it to execute the navigation function.
In yet another embodiment, whether the camera application executes the navigation function may be decided by checking whether the navigation instruction contains a target address. If the navigation instruction contains no target address, the camera application is not triggered to execute the navigation function; if it does, the camera application is triggered to execute the navigation function.
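The following Kotlin sketch pulls these embodiments together: it normalizes the possible sources of the navigation instruction and applies the target-address guard just described. The NavigationInput type and helper names are illustrative, not from the patent; only the clipboard call is a real Android API.

```kotlin
import android.content.ClipboardManager
import android.content.Context

// The target address may arrive by voice, text-box input, a clipboard paste,
// or a scanned two-dimensional code.
sealed class NavigationInput {
    data class Voice(val recognizedText: String) : NavigationInput()
    data class TextBox(val typedText: String) : NavigationInput()
    object ClipboardPaste : NavigationInput()
    data class QrCode(val payload: String) : NavigationInput()
}

fun resolveTargetAddress(context: Context, input: NavigationInput): String? {
    val raw = when (input) {
        is NavigationInput.Voice -> input.recognizedText   // from the speech SDK
        is NavigationInput.TextBox -> input.typedText
        is NavigationInput.QrCode -> input.payload
        NavigationInput.ClipboardPaste -> {                // parse the clipboard
            val clipboard =
                context.getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
            clipboard.primaryClip?.getItemAt(0)?.text?.toString()
        }
    }
    return raw?.trim()?.takeIf { it.isNotEmpty() }
}

// Guard: only a navigation instruction that actually contains a target
// address triggers the camera application's navigation function.
fun maybeStartNavigation(context: Context, input: NavigationInput, start: (String) -> Unit) {
    resolveTargetAddress(context, input)?.let(start)
}
```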
Fig. 2 is a flowchart illustrating another image acquisition method according to an exemplary embodiment. As shown in Fig. 2, the method of displaying navigation map information includes the following steps S21 through S24.
In step S21, in the case of receiving a navigation instruction that triggers the camera application to execute a navigation function, a preview image captured in real time by the camera application is acquired.
In step S22, scene recognition is performed on the preview image, and a preset landmark object included in the current scene is determined based on the scene recognition result.
In the embodiment of the present disclosure, to help the map navigation engine locate the current position quickly and determine the navigation map information from the current position and the target address, scene recognition is performed on the preview image captured by the camera application in real time to identify whether it contains a preset landmark object. The preset landmark object included in the current scene is then determined from the landmark objects recognized in the preview image, so the map navigation engine can position quickly according to the determined landmark. A preset landmark object may be a road, a landmark building, or another distinctive object.
In one example, scene recognition may be performed on the real-time preview image based on an Artificial Intelligence (AI) algorithm: the object textures of the real scene in the preview image are extracted, and the preset landmark objects included in the current scene are determined from those object textures.
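A sketch of how such recognition could be attached to the real-time preview stream, assuming Android's CameraX ImageAnalysis as the frame source; detectLandmarks stands in for the unspecified AI model.

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy

// Same illustrative Landmark type as in the earlier sketch.
data class Landmark(val label: String)

// Hooks scene recognition into the real-time preview stream. CameraX is used
// here only as a plausible host for the "preview image acquired in real time";
// detectLandmarks() stands in for the AI model that extracts object textures
// and classifies preset landmark objects such as roads or landmark buildings.
class LandmarkAnalyzer(
    private val detectLandmarks: (ImageProxy) -> List<Landmark>,  // hypothetical AI model
    private val onLandmark: (Landmark) -> Unit
) : ImageAnalysis.Analyzer {

    override fun analyze(image: ImageProxy) {
        try {
            // Report the first preset landmark object found in the current frame.
            detectLandmarks(image).firstOrNull()?.let(onLandmark)
        } finally {
            image.close()  // the frame must be released or the stream stalls
        }
    }
}
```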
In step S23, when a preset landmark object exists in the preview image, the map navigation engine is called, and navigation map information determined by the map navigation engine based on the preset landmark object and the target address is acquired.
In step S24, navigation guidance information is displayed on a preview interface of the camera application based on the navigation map information and the preview image.
In an embodiment, the recognized preset landmark object can be displayed together with the navigation guidance information, so that while navigating the user can quickly relate the real scene to the guidance through the displayed landmark, improving the intuitiveness of navigation.
Fig. 3 is a flowchart illustrating still another image acquisition method according to an exemplary embodiment. As shown in Fig. 3, the method of displaying navigation map information includes the following steps S31 through S37.
In step S31, in the case of receiving a navigation instruction that triggers the camera application to execute a navigation function, a preview image captured in real time by the camera application is acquired.
In step S32, scene recognition is performed on the preview image, and a preset landmark object included in the current scene is determined based on the scene recognition result.
In step S33, when the preset landmark object exists in the preview image, the map navigation engine is called.
In step S34, landmark object information determined by the map navigation engine by matching the panoramic map of the current location against the preset landmark object is acquired.
In the embodiment of the present disclosure, to improve the intuitiveness of navigation and enhance the practical value of the navigation guidance information, the map navigation engine may match the recognized preset landmark object against the panoramic map of the current position, thereby determining the landmark object information of that landmark within the map navigation engine and clarifying its position in the navigation map information.
In one example, the map navigation engine may be a map navigation SDK; when matching is performed, the preset landmark object is matched against the map navigation SDK's panoramic map of the current location to obtain the determined landmark object information.
In step S35, if a trigger instruction for the associated information of the preset landmark object is detected, the associated information of that landmark object is requested from the cloud server based on the landmark object information, and the associated information acquired from the cloud server is displayed.
In the embodiment of the present disclosure, while the camera application executes the navigation function, a detected trigger instruction for the associated information of a preset landmark object indicates that the user wants to learn more about that landmark. A request may then be sent to the cloud server to obtain the associated information of the preset landmark object, and the acquired information is displayed in the currently shown navigation map information. The user can thus quickly obtain the landmark information they need without jumping to another application.
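A minimal sketch of the cloud request, assuming a hypothetical HTTP endpoint; the patent does not specify the cloud server's API, and on Android this call would have to run off the main thread.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Step S35 sketch: ask a cloud server for the information associated with the
// landmark the user selected. The endpoint and response shape are illustrative.
fun fetchLandmarkAssociatedInfo(landmarkId: String): String? {
    val url = URL("https://example.com/api/landmarks/$landmarkId/info") // hypothetical endpoint
    val connection = url.openConnection() as HttpURLConnection
    return try {
        connection.requestMethod = "GET"
        connection.connectTimeout = 3_000
        connection.readTimeout = 3_000
        if (connection.responseCode == HttpURLConnection.HTTP_OK) {
            connection.inputStream.bufferedReader().readText() // e.g. opening hours, reviews
        } else {
            null
        }
    } finally {
        connection.disconnect()
    }
}
```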
In an embodiment, to keep the navigation map information from becoming cluttered and hard to browse, when the associated information of a preset landmark object is displayed in the navigation guidance information, the landmark can be marked with an identifier symbol or a short name, converging information irrelevant to the route as much as possible and keeping the navigation map information clean and simple. In one example, the identifier may be a scaled-down miniature icon, a triangle, a teardrop shape, or the like.
In an implementation scenario, the trigger instruction by which the user triggers display of the landmark object information may be determined based on the user keeping a touch on the landmark for at least a first time threshold, which prevents accidental touches from displaying the information by mistake and hurting the user experience. For example, the first time threshold may be 3 seconds (s): when the user presses a building shown in the navigation map information for more than 3 s, the trigger instruction for displaying its landmark object information fires.
In another implementation scenario, the camera application acquires the preview image through an image capture component of the terminal. The trigger instruction for displaying the landmark object information may then be determined based on the user aiming the image capture component at a landmark in the real scene and holding focus on it for at least a second time threshold, so the needed landmark information can be obtained directly without manual operation. For example, the second time threshold may be 3 seconds (s): when the user keeps the camera application's image capture component focused on a building for more than 3 s, the trigger instruction for displaying its landmark object information fires. The first and second time thresholds may be the same or different, and need not be related.
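Both trigger paths reduce to the same dwell-timer pattern, sketched below in Kotlin; the landmark id and the callback wiring are illustrative.

```kotlin
import android.os.Handler
import android.os.Looper

// Fires the trigger instruction when a dwell (a held touch on a displayed
// landmark, or autofocus held on one) survives the threshold. The 3 s default
// matches the examples above; the two thresholds need not be equal.
class DwellTrigger(
    private val thresholdMs: Long = 3_000,
    private val onTrigger: (String) -> Unit   // receives the landmark id
) {
    private val handler = Handler(Looper.getMainLooper())
    private var pending: Runnable? = null
    private var currentId: String? = null

    // Call when the user touches a landmark, or when focus locks onto one.
    fun onDwellStart(landmarkId: String) {
        if (landmarkId == currentId) return       // already counting for this landmark
        cancel()
        currentId = landmarkId
        pending = Runnable { onTrigger(landmarkId) }.also {
            handler.postDelayed(it, thresholdMs)  // fires only if the dwell survives
        }
    }

    // Call on touch-up, focus loss, or when the landmark leaves the frame.
    fun cancel() {
        pending?.let(handler::removeCallbacks)
        pending = null
        currentId = null
    }
}
```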
In step S36, navigation map information determined by the map navigation engine based on the preset landmark object and the target address is acquired.
In step S37, navigation guidance information is displayed on a preview interface of the camera application based on the navigation map information and the preview image.
Fig. 4 is a flowchart illustrating an image acquisition method according to an exemplary embodiment. As shown in Fig. 4, the image acquisition method is used in a terminal and includes the following steps S41 through S49.
In step S41, in the case of receiving a navigation instruction that triggers the camera application to execute a navigation function, a preview image captured in real time by the camera application is acquired.
In step S42, scene recognition is performed on the preview image, and a preset landmark object included in the current scene is determined based on the scene recognition result.
In step S43, when the preset landmark object exists in the preview image, the map navigation engine is called.
In step S44, landmark object information determined by the map navigation engine by matching the panoramic map of the current location against the preset landmark object is acquired.
In step S45, if a trigger instruction for the associated information of the preset landmark object is detected, the associated information of that landmark object is requested from the cloud server based on the landmark object information, and the associated information acquired from the cloud server is displayed.
In step S46, navigation map information determined by the map navigation engine based on the preset landmark object and the target address is acquired.
In step S47, navigation guidance information is displayed on a preview interface of the camera application based on the navigation map information and the preview image.
In step S48, the virtual navigation avatar and the virtual navigation indicator matching the navigation guidance information are determined.
In the embodiment of the present disclosure, to make the camera application's display of navigation guidance information more engaging, a virtual navigation avatar and a virtual navigation indicator matching the guidance can be displayed alongside it, strengthening the human-computer interaction of the camera application.
In one implementation scenario, the virtual navigation avatar may be a cartoon character or an emoticon. Virtual navigation avatar packs may be published in the form of emoticon packs or purchased by the user in a store inside the camera application. In another implementation scenario, the camera application may add functions such as game interaction or voice interaction built on the virtual navigation avatar, so that while obtaining the target route the user also finds the camera application more entertaining, further improving the experience.
In step S49, a virtual navigation avatar and a virtual navigation indicator are displayed in the navigation guidance information displayed on the preview interface of the camera application.
In one example, the virtual navigation avatar and the virtual navigation indicator may be fused into the navigation map information that has already been fused with the real-scene information, that is, into the real road preview information.
Based on the same concept, the embodiments of the present disclosure further provide another image acquisition method that can update and display navigation map information containing the target route direction based on the preview image captured in real time.
Fig. 5 is a flowchart illustrating an image acquisition method according to an exemplary embodiment. As shown in Fig. 5, the image acquisition method is used in a terminal and includes the following steps S51 through S510.
In step S51, in the case of receiving a navigation instruction that triggers the camera application to execute a navigation function, a preview image captured in real time by the camera application is acquired.
In step S52, scene recognition is performed on the preview image, and a preset landmark object included in the current scene is determined based on the scene recognition result.
In step S53, when the preset landmark object exists in the preview image, the map navigation engine is called.
In step S54, landmark object information determined by the map navigation engine by matching the panoramic map of the current location against the preset landmark object is acquired.
In step S55, if a trigger instruction for the associated information of the preset landmark object is detected, the associated information of that landmark object is requested from the cloud server based on the landmark object information, and the associated information acquired from the cloud server is displayed.
In step S56, navigation map information determined by the map navigation engine based on the preset landmark object and the target address is acquired.
In step S57, navigation guidance information is displayed on a preview interface of the camera application based on the navigation map information and the preview image.
In step S58, the virtual navigation avatar and the virtual navigation indicator matching the navigation guidance information are determined.
In step S59, a virtual navigation avatar and a virtual navigation indicator are displayed in the navigation guidance information displayed on the preview interface of the camera application.
In step S510, if the scene recognition result obtained by performing scene recognition on the preview image indicates that the current location and the target address are different locations within the same building, the navigation guidance information is updated and displayed.
In the embodiment of the present disclosure, to improve navigation accuracy, while the camera application executes the navigation function, scene recognition is performed on the preview image captured in real time to recognize the real scene the user is currently in, so that the navigation guidance information can be updated promptly according to the user's current position. Scene recognition may include spatial scene recognition and environmental scene recognition. Spatial scene recognition distinguishes, for example, indoor environments from outdoor environments; environmental scene recognition distinguishes, for example, daytime scenes from nighttime scenes.
When the scene recognition result shows that the user's current position and the target address are different locations within the same building, the navigation guidance information is re-planned from the current position, the target route direction is adjusted to an indoor route direction, and the navigation map information containing the target route direction is updated and displayed. This addresses the unfriendly indoor navigation display of traditional navigation applications. The updated navigation guidance information includes a navigation route between the entrance/exit location of the building and the target address.
In one implementation scenario, when the user's current position and the target address are determined to be different locations within the same building, the user is first navigated to an elevator or stairway of the building according to their current position, after which the route is re-planned.
In another implementation scenario, when the user's current position and the target address are different locations within the same building and the navigation involves switching from an indoor scene to an outdoor scene, the target address is temporarily switched to the doorway of the building; after the user reaches the doorway, the navigation map information containing the target route direction is re-planned according to the real target address.
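The re-planning rule these scenarios describe can be summarized as a small decision function; the types below are illustrative, with the actual routing left to the map navigation engine.

```kotlin
enum class SceneKind { INDOOR, OUTDOOR }
enum class Waypoint { TARGET, ELEVATOR_OR_STAIRWAY, BUILDING_DOORWAY }

data class ScenePosition(val buildingId: String?, val kind: SceneKind)

// Returns the next waypoint the route should be planned toward; once an
// intermediate waypoint is reached, the route is re-planned to the real target.
fun nextWaypoint(current: ScenePosition, target: ScenePosition): Waypoint = when {
    current.buildingId == null || current.buildingId != target.buildingId ->
        Waypoint.TARGET                   // different buildings: route directly
    current.kind == SceneKind.INDOOR && target.kind == SceneKind.INDOOR ->
        Waypoint.ELEVATOR_OR_STAIRWAY     // change floors first, then re-plan
    current.kind == SceneKind.INDOOR && target.kind == SceneKind.OUTDOOR ->
        Waypoint.BUILDING_DOORWAY         // leave the building first, then re-plan
    else -> Waypoint.TARGET
}
```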
Based on the same concept, the embodiment of the disclosure also provides another image acquisition method.
Fig. 6 is a flowchart illustrating an image acquisition method according to an exemplary embodiment. As shown in Fig. 6, the image acquisition method includes the following steps S61 through S64.
In step S61, in the case of receiving a navigation instruction that triggers the camera application to execute a navigation function, a preview image captured in real time by the camera application is acquired.
In step S62, when a preset landmark object exists in the preview image, the map navigation engine is called, and navigation map information determined by the map navigation engine based on the preset landmark object and the target address is acquired.
In step S63, navigation guidance information is displayed on a preview interface of the camera application based on the navigation map information and the preview image.
In step S64, if the scene recognition result obtained by performing scene recognition on the preview image captured by the camera application in real time indicates that the current scene is a night scene, the lighting device is called and turned on.
In the embodiment of the present disclosure, scene recognition performed on the preview image captured in real time can detect that the user is executing the navigation function in a night scene. To help the user move toward the target address by the navigation guidance displayed on the terminal at night, the terminal's lighting device is called and turned on so the user can clearly see the surrounding road conditions, improving the safety of night travel and avoiding accidents. In one example, the lighting device may be a flashlight.
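On Android, turning the lighting device on can be done with the standard Camera2 torch API, as in the sketch below; a real implementation that already holds the camera open for the preview would enable the flash on the active capture session instead.

```kotlin
import android.content.Context
import android.hardware.camera2.CameraAccessException
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Turns the terminal's lighting device (the camera flash) on or off when
// scene recognition reports a night scene.
fun setTorchForNightScene(context: Context, isNightScene: Boolean) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val cameraId = manager.cameraIdList.firstOrNull { id ->
        manager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.FLASH_INFO_AVAILABLE) == true
    } ?: return                          // this terminal has no flash unit
    try {
        manager.setTorchMode(cameraId, isNightScene)
    } catch (e: CameraAccessException) {
        // The camera is currently open (e.g. for the preview); in that case
        // the flash must be enabled on the active capture session instead.
    }
}
```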
Based on the same concept, the embodiment of the disclosure also provides another image acquisition method.
Fig. 7 is a flowchart illustrating an image acquisition method according to an exemplary embodiment. As shown in Fig. 7, the image acquisition method is used in a terminal and includes the following steps S71 through S74.
In step S71, in the case of receiving a navigation instruction that triggers the camera application to execute a navigation function, a preview image captured in real time by the camera application is acquired.
In step S72, when a preset landmark object exists in the preview image, the map navigation engine is called, and navigation map information determined by the map navigation engine based on the preset landmark object and the target address is acquired.
In step S73, the navigation guidance information is displayed on the preview interface of the camera application based on the navigation map information and the preview image.
In step S74, if the time for which the camera application runs in the background is less than the specified time threshold, the camera application is kept executing the navigation function.
In the embodiment of the present disclosure, while using the camera application to execute the navigation function and moving toward the target address according to the navigation guidance information, the user may also need the terminal to run other applications or execute other functions. To preserve the user's navigation without preventing them from using other applications or functions on the terminal, when the user jumps to another application or function during navigation, the camera application is moved to run in the background and its state of executing the navigation function is maintained. If the time the camera application runs in the background is less than the specified time threshold, the state of executing the navigation function is kept throughout. In one example, if the camera application returns to the foreground before the specified time threshold elapses, its navigation state is restored immediately.
In an implementation scenario, when the camera application continues the navigation function in the background in a night scene, the lighting device may remain on.
In an embodiment, if the time the camera application runs in the background exceeds the specified time threshold, it can be inferred that the user does not need the navigation function for now, and the camera application may be closed automatically to save power.
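A sketch of this policy using AndroidX lifecycle callbacks. The five-minute threshold is an arbitrary example, and here the check happens on return to the foreground; a production version might instead schedule the shutdown while still backgrounded, as the paragraph above describes.

```kotlin
import android.os.SystemClock
import androidx.lifecycle.DefaultLifecycleObserver
import androidx.lifecycle.LifecycleOwner

// Keeps the navigation state alive across short trips to the background and
// tears it down after the specified time threshold. stopNavigation() stands
// in for whatever teardown the camera application actually performs.
class BackgroundNavigationPolicy(
    private val timeoutMs: Long = 5 * 60 * 1_000L,    // example threshold: 5 minutes
    private val stopNavigation: () -> Unit
) : DefaultLifecycleObserver {
    private var backgroundedAt: Long? = null

    override fun onStop(owner: LifecycleOwner) {       // app moved to the background
        backgroundedAt = SystemClock.elapsedRealtime()
    }

    override fun onStart(owner: LifecycleOwner) {      // app back in the foreground
        val since = backgroundedAt ?: return
        backgroundedAt = null
        if (SystemClock.elapsedRealtime() - since >= timeoutMs) {
            stopNavigation()                           // timed out: release the navigation state
        }
        // Otherwise the navigation state was kept and resumes immediately.
    }
}
```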
In an implementation scenario, the interactive process by which a user obtains navigation map information containing the target route direction fused with real-scene information through the camera application may be as shown in Fig. 8, which illustrates a navigation interaction diagram according to an exemplary embodiment.
The user sends a navigation request to the camera application, triggering the navigation instruction for the camera application to execute the navigation function. According to the received navigation instruction, the camera application calls the map navigation SDK and acquires the navigation map information containing the target route direction. The camera application then performs scene recognition on the preview image captured in real time, extracts the image texture information in the preview image, and fuses that texture information with the navigation map information, yielding navigation map information that contains the target route direction and is fused with the real-scene information. The navigation guidance information containing the target route direction and fused with the real-scene information is displayed, so the user can preview real road preview information.
Based on the same conception, the embodiments of the present disclosure further provide an image acquisition apparatus. The image acquisition apparatus can be applied to a terminal, where the terminal includes a camera application used for image acquisition, the camera application having an image acquisition function and a navigation function.
It is understood that, to implement the above functions, the image acquisition apparatus provided in the embodiments of the present disclosure includes corresponding hardware structures and/or software modules for executing each function. Combining the exemplary units and algorithm steps disclosed in the embodiments of the present disclosure, the embodiments can be implemented in hardware or in a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
Fig. 9 is a block diagram illustrating an image acquisition apparatus according to an exemplary embodiment. Referring to Fig. 9, the image acquisition apparatus 100 includes an acquisition unit 101, a calling unit 102, and a display unit 103.
The acquisition unit 101 is configured to acquire a preview image captured by the camera application in real time when a navigation instruction triggering the camera application to execute a navigation function is received; the navigation instruction includes a navigation target address.
The calling unit 102 is configured to call a map navigation engine when a preset landmark object exists in the preview image, and to acquire navigation map information determined by the map navigation engine based on the preset landmark object and the target address.
The display unit 103 is configured to display navigation guidance information on a preview interface of the camera application based on the navigation map information and the preview image.
In an embodiment, the image acquisition apparatus 100 further includes: a recognition unit 104 configured to perform scene recognition on the preview image and determine a preset landmark object included in the current scene based on the scene recognition result.
In another embodiment, the acquisition unit 101 is further configured to: acquire landmark object information determined by the map navigation engine by matching a panoramic map of the current location against the preset landmark object; and, if a trigger instruction for the associated information of a preset landmark object is detected, request the associated information of that landmark object from a cloud server based on the landmark object information and display the associated information acquired from the cloud server.
In a further embodiment, the display unit 103 is further configured to: determine a virtual navigation avatar and a virtual navigation indicator matching the navigation guidance information, and display the virtual navigation avatar and the virtual navigation indicator in the navigation guidance information shown on the preview interface of the camera application.
In another embodiment, the image acquisition apparatus 100 further includes: an updating unit 105 configured to update the displayed navigation guidance information if a scene recognition result obtained by performing scene recognition on the preview image captured by the camera application in real time indicates that the current location and the target address are different locations within the same building, where the updated navigation guidance information includes a navigation route between the entrance/exit location of the building and the target address.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 10 is a block diagram illustrating an image capture device 200 according to an exemplary embodiment. For example, the image capturing apparatus 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 10, the image acquisition apparatus 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an interface for input/output (I/O) 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls the overall operation of the image capture device 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 202 may include one or more processors 220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interaction between the processing component 202 and other components. For example, the processing component 202 can include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support the operation at the image acquisition apparatus 200. Examples of such data include instructions for any application or method operating on image capture device 200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 204 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 206 provide power to the various components of image capture device 200. Power components 206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for image capture device 200.
The multimedia component 208 includes a screen that provides an output interface between the image capture device 200 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 208 includes a front facing camera and/or a rear facing camera. When the image capturing apparatus 200 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a Microphone (MIC) configured to receive an external audio signal when the image capture device 200 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 214 includes one or more sensors for providing various aspects of status assessment for the image capture device 200. For example, the sensor assembly 214 may detect an open/closed state of the image capturing device 200, the relative positioning of components, such as a display and keypad of the image capturing device 200, the sensor assembly 214 may also detect a change in position of the image capturing device 200 or a component of the image capturing device 200, the presence or absence of user contact with the image capturing device 200, the orientation or acceleration/deceleration of the image capturing device 200, and a change in temperature of the image capturing device 200. The sensor assembly 214 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the image capture device 200 and other devices. The image capture device 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the image capturing apparatus 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 204 comprising instructions, executable by the processor 220 of the image capture device 200 to perform the above-described methods is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is further understood that "a plurality" in this disclosure means two or more, and that other quantifying terms are to be read analogously. "And/or" describes an association between the associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," etc. are used interchangeably throughout. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that, unless otherwise specified, "connected" includes direct connections between the two without the presence of other elements, as well as indirect connections between the two with the presence of other elements.
It is further understood that, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An image acquisition method applied to a terminal including a camera application having an image acquisition function, the image acquisition method comprising:
acquiring, in real time, a preview image captured by the camera application in a case that a navigation instruction triggering the camera application to execute a navigation function is received, wherein the navigation instruction comprises a navigation target address;
invoking a map navigation engine in a case that a predetermined marker object exists in the preview image, and acquiring navigation map information determined by the map navigation engine based on the predetermined marker object and the target address; and
displaying navigation guidance information on a preview interface of the camera application based on the navigation map information and the preview image.
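For orientation only, the following Kotlin sketch models the three steps of claim 1 under stated assumptions. Every name in it (NavigationInstruction, CameraApplication, MapNavigationEngine, detectMarkerObject, and so on) is a hypothetical stand-in invented for this sketch, not an API of the claimed terminal or of any real camera or map library, and the marker check is a trivial placeholder.

```kotlin
// All names here are hypothetical stand-ins for the entities named in claim 1.
data class NavigationInstruction(val targetAddress: String)
class PreviewImage(val pixels: ByteArray)
data class NavigationMapInfo(val route: List<String>)

// Assumed bindings to the camera application and a map navigation engine.
interface CameraApplication {
    fun capturePreviewImage(): PreviewImage
    fun showGuidance(text: String)
}

interface MapNavigationEngine {
    fun planRoute(markerObject: String, targetAddress: String): NavigationMapInfo
}

// Placeholder for the marker-object check: returns the recognized
// predetermined marker object, or null when none exists in the preview image.
fun detectMarkerObject(image: PreviewImage): String? =
    if (image.pixels.isNotEmpty()) "building-entrance-sign" else null

// The three claimed steps in order: acquire the preview image when the
// navigation instruction is received, invoke the map navigation engine only
// when a predetermined marker object exists, then display navigation
// guidance information on the camera application's preview interface.
fun handleNavigationInstruction(
    camera: CameraApplication,
    engine: MapNavigationEngine,
    instruction: NavigationInstruction,
) {
    val preview = camera.capturePreviewImage()
    val marker = detectMarkerObject(preview) ?: return // no marker: engine is not invoked
    val mapInfo = engine.planRoute(marker, instruction.targetAddress)
    camera.showGuidance("To ${instruction.targetAddress}: ${mapInfo.route.joinToString(" -> ")}")
}
```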
2. The image acquisition method of claim 1, wherein, before the map navigation engine is invoked in a case that a predetermined marker object exists in the preview image, the image acquisition method further comprises:
performing scene recognition on the preview image, and determining a predetermined marker object included in the current scene based on a scene recognition result.
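A companion sketch of claim 2's scene-recognition step, reusing the hypothetical PreviewImage type from the claim 1 sketch; the stub recognizer stands in for whatever vision model a real terminal would use, and all names remain illustrative.

```kotlin
// Hypothetical scene-recognition result: a scene label plus the marker
// objects recognized in the current scene.
data class SceneRecognitionResult(
    val sceneLabel: String,
    val markerObjects: List<String>,
)

// Placeholder recognizer; assumed to be backed by a vision model in practice.
fun recognizeScene(image: PreviewImage): SceneRecognitionResult =
    SceneRecognitionResult("street", listOf("building-entrance-sign"))

// Claim 2: determine, from the scene recognition result, which predetermined
// marker object is present in the current scene before the engine is invoked.
fun findPredeterminedMarker(image: PreviewImage, knownMarkers: Set<String>): String? =
    recognizeScene(image).markerObjects.firstOrNull { it in knownMarkers }
```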
3. The image acquisition method according to claim 2, wherein, after the map navigation engine is invoked in a case that a predetermined marker object exists in the preview image, the image acquisition method further comprises:
acquiring marker object information determined by the map navigation engine by matching a panoramic map of the current position against the predetermined marker object; and
if a trigger instruction for marker object association information of the predetermined marker object is detected, requesting, based on the marker object information, the marker object association information associated with the predetermined marker object from a cloud server, and displaying the marker object association information acquired from the cloud server.
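Claim 3 adds two interactions, panoramic-map matching and a cloud-server request. The sketch below models them with hypothetical types (MarkerObjectInfo, PanoramicMapEngine, CloudServerClient); none of these names a real service or API, and the call shapes are assumptions.

```kotlin
// Hypothetical marker object information produced by matching the panoramic
// map of the current position against the predetermined marker object.
data class MarkerObjectInfo(val markerId: String, val panoramaTileId: String)

interface PanoramicMapEngine {
    fun matchMarker(latitude: Double, longitude: Double, marker: String): MarkerObjectInfo?
}

// Assumed cloud-server client; the call shape is illustrative only.
interface CloudServerClient {
    fun fetchAssociationInfo(info: MarkerObjectInfo): String
}

// Claim 3: on a trigger instruction for the marker's association information,
// request it from the cloud server based on the marker object information,
// then display whatever the server returns (e.g. opening hours of a shop).
fun onAssociationInfoTriggered(
    matched: MarkerObjectInfo?,
    cloud: CloudServerClient,
    display: (String) -> Unit,
) {
    val info = matched ?: return // no panorama match, nothing to request
    display(cloud.fetchAssociationInfo(info))
}
```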
4. The image acquisition method according to any one of claims 1 to 3, wherein the image acquisition method further comprises:
determining a virtual navigation image and a virtual navigation indication mark that match the navigation guidance information; and
displaying the virtual navigation image and the virtual navigation indication mark in the navigation guidance information displayed on the preview interface of the camera application.
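A minimal sketch of claim 4's overlay pairing, with a hypothetical asset name and an assumed bearing-based indication mark; how a real terminal renders these on the preview interface is not specified here.

```kotlin
// Hypothetical overlay elements matched to the navigation guidance
// information: a virtual navigation image (e.g. a guide avatar) and a
// virtual navigation indication mark (e.g. a direction arrow).
data class VirtualNavigationImage(val assetName: String)
data class VirtualIndicationMark(val arrowBearingDegrees: Float)

// Claim 4: pick the overlay pair for the guidance currently displayed on
// the preview interface; the asset name and bearing are illustrative.
fun overlayForGuidance(nextTurnBearingDegrees: Float) =
    VirtualNavigationImage("guide_avatar") to VirtualIndicationMark(nextTurnBearingDegrees)
```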
5. The image acquisition method according to any one of claims 1 to 4, wherein the image acquisition method further comprises:
if a scene recognition result obtained by performing scene recognition on the preview image indicates that the current position and the target address are at different positions of the same building, updating and displaying the navigation guidance information, wherein the updated navigation guidance information comprises a navigation route between an entrance/exit position of the building and the target address.
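Claim 5's same-building branch reduces to a small re-planning rule; the sketch below expresses it with hypothetical Position, entranceOf, and planRoute stand-ins.

```kotlin
// Claim 5 sketch: all types and the two injected functions are hypothetical.
data class Position(val building: String?, val label: String)

// When scene recognition indicates that the current position and the target
// address are different positions of the same building, re-plan the route
// from the building's entrance/exit position to the target; otherwise keep
// the existing guidance (signalled here by returning null).
fun maybeUpdateRoute(
    current: Position,
    target: Position,
    entranceOf: (String) -> Position,
    planRoute: (Position, Position) -> List<String>,
): List<String>? {
    val building = current.building ?: return null
    if (building != target.building) return null
    return planRoute(entranceOf(building), target)
}
```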
6. An image acquisition apparatus applied to a terminal including a camera application having an image acquisition function, the image acquisition apparatus comprising:
an acquisition unit configured to acquire, in real time, a preview image captured by the camera application in a case that a navigation instruction triggering the camera application to execute a navigation function is received, wherein the navigation instruction comprises a navigation target address;
an invoking unit configured to invoke a map navigation engine in a case that a predetermined marker object exists in the preview image, and to acquire navigation map information determined by the map navigation engine based on the predetermined marker object and the target address; and
a display unit configured to display navigation guidance information on a preview interface of the camera application based on the navigation map information and the preview image.
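The apparatus claims mirror the method claims as cooperating units. The structural sketch below groups the three units of claim 6 into one class, reusing the hypothetical types from the claim 1 sketch; the wiring is illustrative, not a statement about the actual apparatus.

```kotlin
// Structural sketch of claim 6: one member function per claimed unit.
class ImageAcquisitionApparatus(
    private val camera: CameraApplication,
    private val engine: MapNavigationEngine,
) {
    // Acquisition unit: real-time preview on receipt of a navigation instruction.
    fun acquire(instruction: NavigationInstruction): PreviewImage =
        camera.capturePreviewImage()

    // Invoking unit: calls the map navigation engine once a marker is present.
    fun invokeEngine(marker: String, instruction: NavigationInstruction): NavigationMapInfo =
        engine.planRoute(marker, instruction.targetAddress)

    // Display unit: renders guidance on the camera application's preview interface.
    fun display(mapInfo: NavigationMapInfo) =
        camera.showGuidance(mapInfo.route.joinToString(" -> "))
}
```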
7. The image acquisition apparatus of claim 6, further comprising:
a recognition unit configured to perform scene recognition on the preview image, and to determine a predetermined marker object included in the current scene based on a scene recognition result.
8. The image acquisition apparatus according to claim 7, wherein the acquisition unit is further configured to:
acquire marker object information determined by the map navigation engine by matching a panoramic map of the current position against the predetermined marker object; and
if a trigger instruction for marker object association information of the predetermined marker object is detected, request, based on the marker object information, the marker object association information associated with the predetermined marker object from a cloud server, and display the marker object association information acquired from the cloud server.
9. The image acquisition apparatus according to any one of claims 6 to 8, wherein the display unit is further configured to:
determine a virtual navigation image and a virtual navigation indication mark that match the navigation guidance information; and
display the virtual navigation image and the virtual navigation indication mark in the navigation guidance information displayed on the preview interface of the camera application.
10. The image acquisition apparatus according to any one of claims 6 to 9, further comprising:
an updating unit configured to update and display the navigation guidance information if a scene recognition result, obtained by performing scene recognition on the preview image acquired by the camera application in real time, indicates that the current position and the target address are at different positions of the same building, wherein the updated navigation guidance information comprises a navigation route between an entrance/exit position of the building and the target address.
11. An image acquisition apparatus, comprising:
a memory configured to store instructions; and
a processor configured to invoke the instructions stored in the memory to perform the image acquisition method of any one of claims 1 to 5.
12. A computer-readable storage medium having stored therein instructions which, when executed by a processor, cause the processor to perform the image acquisition method of any one of claims 1 to 5.
CN202110005582.2A 2021-01-05 2021-01-05 Image acquisition method, image acquisition device and storage medium Active CN114726999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110005582.2A CN114726999B (en) 2021-01-05 2021-01-05 Image acquisition method, image acquisition device and storage medium

Publications (2)

Publication Number Publication Date
CN114726999A true CN114726999A (en) 2022-07-08
CN114726999B CN114726999B (en) 2023-12-26

Family

ID=82234999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110005582.2A Active CN114726999B (en) 2021-01-05 2021-01-05 Image acquisition method, image acquisition device and storage medium

Country Status (1)

Country Link
CN (1) CN114726999B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103471580A (en) * 2012-06-06 2013-12-25 三星电子株式会社 Method for providing navigation information, mobile terminal, and server
CN108090126A (en) * 2017-11-14 2018-05-29 维沃移动通信有限公司 Image processing method, device and mobile terminal, image-recognizing method and server
CN108195390A (en) * 2017-12-29 2018-06-22 北京安云世纪科技有限公司 A kind of air navigation aid, device and mobile terminal
CN108491485A (en) * 2018-03-13 2018-09-04 北京小米移动软件有限公司 Information cuing method, device and electronic equipment
CN108596095A (en) * 2018-04-24 2018-09-28 维沃移动通信有限公司 A kind of information processing method and mobile terminal

Also Published As

Publication number Publication date
CN114726999B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN110662083B (en) Data processing method and device, electronic equipment and storage medium
CN110377365B (en) Method and device for showing small program
EP3150964A1 (en) Navigation method and device
CN111664866A (en) Positioning display method and device, positioning method and device and electronic equipment
EP3147802B1 (en) Method and apparatus for processing information
CN112991553B (en) Information display method and device, electronic equipment and storage medium
CN110891191B (en) Material selection method, device and storage medium
CN107132769B (en) Intelligent equipment control method and device
KR20220149503A (en) Image capturing method and apparatus, electronic device and computer readable storage medium
CN114009003A (en) Image acquisition method, device, equipment and storage medium
EP3644177A1 (en) Input method, device, apparatus, and storage medium
CN105763552B (en) Transmission method, device and system in remote control
CN112146676B (en) Information navigation method, device, equipment and storage medium
CN112950712B (en) Positioning method and device, electronic equipment and storage medium
CN113989469A (en) AR (augmented reality) scenery spot display method and device, electronic equipment and storage medium
CN108398127A (en) A kind of indoor orientation method and device
CN108986803B (en) Scene control method and device, electronic equipment and readable storage medium
CN114726999B (en) Image acquisition method, image acquisition device and storage medium
CN110636377A (en) Video processing method, device, storage medium, terminal and server
CN111078346B (en) Target object display method and device, electronic equipment and storage medium
CN112461245A (en) Data processing method and device, electronic equipment and storage medium
CN107679123B (en) Picture naming method and device
CN112506393B (en) Icon display method and device and storage medium
CN113132531B (en) Photo display method and device and storage medium
CN112822389B (en) Photograph shooting method, photograph shooting device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant