CN107402735B - Display method, system, electronic device, and non-volatile storage medium - Google Patents

Display method, system, electronic device, and non-volatile storage medium

Info

Publication number
CN107402735B
Authority
CN
China
Prior art keywords
image
position information
image data
target
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710641469.7A
Other languages
Chinese (zh)
Other versions
CN107402735A
Inventor
汪岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201710641469.7A priority Critical patent/CN107402735B/en
Publication of CN107402735A publication Critical patent/CN107402735A/en
Application granted granted Critical
Publication of CN107402735B publication Critical patent/CN107402735B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/017 - Head mounted
    • G02B2027/0178 - Eyeglass type

Abstract

A display method, the method comprising: acquiring first position information; acquiring target image data based on the first position information, wherein the target image data comprises image data, based on the first position information, captured earlier than the current time; and presenting the target image data in a manner adapted to a current real environment image based on the first position information.

Description

Display method, system, electronic device, and non-volatile storage medium
Technical Field
The present invention relates to the field of display, and in particular, to a display method, a display system, an electronic device, and a non-volatile storage medium.
Background
In daily life, when people observe the real environment with the naked eye or through a display tool, limitations of the real environment, such as fog or other weather that reduces visibility, or objects being occluded, prevent users from seeing the real environment clearly. In driving in particular, an obstructed view keeps the driver from seeing clearly, impairs the driver's observation and judgment, and may even create illusions; this severely degrades the user experience and can lead to serious accidents.
It is therefore desirable to provide a solution to the above problems.
Disclosure of Invention
The embodiments of the present invention provide a display method, a display system, an electronic device, and a non-volatile storage medium, which can help a user effectively observe and recognize the current real environment so that the user can make correct judgments about it.
According to an aspect of the present invention, there is provided a display method including: acquiring first position information; acquiring target image data based on the first position information, wherein the target image data comprises image data, based on the first position information, captured earlier than the current time; and presenting the target image data in a manner adapted to a current real environment image based on the first position information.
Furthermore, according to an embodiment of the present invention, presenting the target image data in a manner adapted to the current real environment image based on the first position information includes: presenting the target image data in the visual area through which the current real environment image is viewed, at least partially replacing the current real environment image based on the first position information with the target image data, or superimposing the target image data on it.
Further, according to an embodiment of the present invention, the acquiring target image data based on the first position information includes: acquiring a first image of the current real environment based on the first position information in advance; if at least a part of the first image satisfies a preset condition, target image data based on the first position information captured earlier than the current time is acquired from the second device.
Furthermore, according to an embodiment of the present invention, presenting the target image data in a manner adapted to the current real environment image based on the first position information includes: determining a target area in the first image, wherein the target area comprises an area in which a target object is occluded or a blurred region; and replacing the current real environment image at the target area in the first image with the target image data, or displaying the target image data at the target area in the first image superimposed on the current real environment image.
Further, according to an embodiment of the present invention, the first position information includes: a location parameter and/or an orientation parameter.
Further, according to an embodiment of the present invention, acquiring the target image data based on the first position information further includes: sending the first position information to a second device; and receiving, from the second device, target image data based on the first position information, wherein the target image data is obtained by the second device searching according to the first position information, or by the second device performing image acquisition based on the first position information.
According to another aspect of the present invention, there is also provided a display method, including: acquiring first position information; and searching for and generating target image data, based on the first position information, captured earlier than the current time, wherein the target image data is used by a first device to present the target image data in a manner adapted to a current real environment image based on the first position information.
Further, according to an embodiment of the present invention, searching for and generating target image data, based on the first position information, captured earlier than the current time includes: judging whether a first image of the current real environment satisfies a preset condition; and acquiring target image data, based on the first position information, captured earlier than the current time when at least part of the first image of the current real environment satisfies the preset condition.
Further, according to an embodiment of the present invention, searching for and generating the target image data, based on the first position information, captured earlier than the current time further includes: determining a target area in the first image, wherein the target area comprises an area in which a target object is occluded or a blurred region; and searching the current real environment image at the target area in the first image to generate target image data for the target area.
According to another aspect of the present invention, there is also provided an electronic device including: a positioning device for acquiring first position information; a processor adapted to implement instructions; a memory adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to acquire target image data based on the first position information, wherein the target image data comprises image data, based on the first position information, captured earlier than the current time; and a display device for presenting the target image data in a manner adapted to a current real environment image based on the first position information.
According to another aspect of the present invention, there is also provided an electronic device including: a processor adapted to implement instructions; and a memory adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to: acquire first position information; and search for and generate target image data, based on the first position information, captured earlier than the current time, wherein the target image data is used by a first device to present the target image data in a manner adapted to a current real environment image based on the first position information.
According to another aspect of the present invention, there is also provided a display system including the above two electronic devices.
According to another aspect of the present invention, there is also provided a non-volatile storage medium readable by a computer, storing computer program instructions which, when executed by the computer, perform the steps of: acquiring first position information; acquiring target image data based on the first position information, wherein the target image data comprises image data based on the first position information captured earlier than the current time; presenting the target image data in a manner that is adapted to a current real environment image based on the first location information.
Through the embodiments of the present invention, even if an object is occluded by an obstacle or other objects, target image data of the occluded object can be obtained based on the first position information, which better helps the user effectively observe and recognize the current real environment so that the user can make correct judgments about it.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. The drawings described below are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
In the drawings:
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention;
FIG. 2 is a schematic view of a wearable device according to an embodiment of the invention;
FIG. 3 is a flow chart of a display method according to an embodiment of the invention;
FIG. 4 is a flow chart of a display method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of an electronic device according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a communication architecture of a display system according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present invention, an embodiment of an electronic device is provided. The electronic device may include, but is not limited to, a wearable device or a display device. The wearable device may be a head-mounted electronic device such as a helmet type, a glasses type, or a necklace type. The display device may be, for example, a monitor or another kind of screen. The wearable device or display device has a communication element for communicating with other electronic devices, in order to send information to and/or receive information from them. The other electronic device may be, for example, a tablet computer, a mobile phone, another wearable electronic device of the same or a different kind, another display device of the same or a different kind, and so on.
In particular, in one embodiment, the wearable device or display device and the other electronic devices may communicate as peer devices. Taking the case in which the wearable device sends information to other electronic devices as an example, the wearable device can send information directly to the other electronic devices, for example in broadcast or another form, without an instruction from those devices.
In another embodiment, the wearable device or the display device may act as a slave device, communicating with other electronic devices acting as master devices. Also taking the case where the wearable device transmits information to other electronic devices as an example, the wearable device first needs to receive a transmission command from the other electronic devices and, after receiving the transmission command, transmits information to the other electronic devices in response to the transmission command.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 1, the electronic device 10 may include:
a positioning device 101 for acquiring first position information;
a processor 103 adapted to implement instructions; and
a memory 105 adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor to:
acquire target image data based on the first position information, wherein the target image data comprises image data, based on the first position information, captured earlier than the current time; and
a display device 107 for presenting the target image data in a manner adapted to the current real environment image based on the first position information.
With the above embodiment of the present invention, when the positioning device 101 acquires the first position information, the processor 103 may acquire the target image data based on the first position information, and the display device 107 then presents the target image data in a manner adapted to the current real environment image based on the first position information. In this way, even if an object is occluded by an obstacle or other objects, target image data of the occluded object can be acquired based on the first position information, which better helps the user effectively observe and recognize the current real environment and make correct judgments about it.
It should be noted that, when the electronic device of the present application is a wearable device, the positioning device 101 may be disposed on a fixing unit of the wearable device; the fixing unit is configured to maintain the relative positional relationship between the wearable device and the user when the user wears it. When the electronic device of the present application is a display device, the display device may further include an image acquisition device such as a camera, and the positioning device 101 may be disposed together with the camera. Optionally, the positioning device may be disposed separately from the display device or integrated with it.
Typically, the positioning device 101 may be, for example, a GPS-based positioning device or a positioning device based on the base stations of a mobile operator network. Any positioning device that can realize positioning may be used; the present application does not limit the specific positioning technology.
Taking a positioning apparatus disposed on the fixing unit of a wearable device as an example, and assuming that the wearable device is a pair of wearable glasses, as shown in fig. 2 the fixing unit of the wearable glasses includes, but is not limited to: the frame, the bridge, the nose pads, the end pieces, the temples, the hinges, the locking blocks, and the like. The positioning device 101 may be a GPS positioning device; the GPS positioning device 202 is disposed on a temple near the end piece.
It should be noted that, when the electronic device of the present application is a wearable device, the display device 107 may be a display of the wearable device, and when the user wears the wearable device, the current real environment can be observed through the display. When the electronic device of the present application is a display device, the display device 107 may be a display screen of the display device, and the user can view or observe the current real environment image through the display screen.
Taking the display of a wearable device as an example, as shown in fig. 2, the display device is a lens 204 of the wearable glasses. In addition, when the electronic device of the present application is a wearable device, the wearable device may further include an image capture device 206, such as a camera. The image capture device 206 may be disposed on the fixing unit of the wearable device.
Alternatively, as shown in fig. 2, the image capturing device 206 is disposed on the temple near the end piece, and may be disposed together with the GPS positioning device 202, for example directly above it. The image acquisition device only needs to be able to capture images; the present application does not limit its position.
The positioning device 101 acquires the first position information. The first position information may be position information of the image capturing device and indicates the position of the image capturing device when it captures the current real environment image. Optionally, the first position information includes a location parameter and/or an orientation parameter.
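By way of illustration only, the following Python sketch shows one possible in-memory representation of such first position information; the class name FirstPositionInfo and its fields are assumptions introduced here for clarity and do not appear in the embodiments.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class FirstPositionInfo:
        """Position of the image capture device at capture time (illustrative only)."""
        # Location parameter, e.g. latitude/longitude reported by a GPS positioning device.
        location: Tuple[float, float]
        # Optional orientation parameter, e.g. heading in degrees if a compass/IMU is present.
        orientation_deg: Optional[float] = None

    # Example: a fix reported by the positioning device 202 on the temple of the glasses.
    first_position = FirstPositionInfo(location=(39.9042, 116.4074), orientation_deg=87.5)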
In the case where the first position information is acquired by the positioning device 101, the processor 103 may acquire target image data based on the first position information, where the target image data includes image data based on the first position information captured earlier than the current time. The processor 103 may be one or more processors, and may be a Central Processing Unit (CPU) or other form of processing unit (including but not limited to a micro processing unit MCU or a processing device such as a programmable logic device FPGA) having data processing capability and/or instruction execution capability. The processor 103 may control other components in the electronic device to perform desired functions based on computer programs stored in the memory 105.
The memory 105 described above may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM), cache memory, and the like. Non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on a computer-readable storage medium and executed by the processor 103 to implement the steps of the embodiments described above. Various application programs and various data, such as the operating state of the display device 107 and the operating state of the application programs, may also be stored in the computer-readable storage medium.
It should be noted that the components and structure of the electronic device shown in fig. 1 are only exemplary and not limiting, and the electronic device may have other components and structures as needed, and may include, for example, a communication device or the like.
Further, according to an embodiment of the present invention, the processor 103 may obtain the target image data based on the first position information by: sending the first position information to the second device; and receiving, from the second device, target image data based on the first position information, wherein the target image data is obtained by the second device searching according to the first position information, or by the second device performing image acquisition based on the first position information.
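A minimal sketch of this exchange is given below, assuming a JSON-over-HTTP interface between the first device and the second device; the URL, the field names, and the function name are illustrative assumptions and are not defined by the embodiments.

    import json
    import urllib.request

    def fetch_target_image_data(first_position, server_url="http://second-device.example/target-image"):
        """Send the first position information to the second device and receive target
        image data captured earlier than the current time. `first_position` is a dict
        such as {"location": [39.9042, 116.4074], "orientation_deg": 87.5}."""
        payload = json.dumps(first_position).encode("utf-8")
        request = urllib.request.Request(
            server_url, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(request, timeout=5) as response:
            # Expected reply: reference image data for the occluded or blurred region,
            # plus optional warning information.
            return json.loads(response.read().decode("utf-8"))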
The second device is a device that communicates with the electronic device of the present application. As mentioned above, the second device may be, for example, a tablet computer, a mobile phone, or other wearable electronic devices of the same or different types, or other display devices of the same or different types, and optionally, the second device may also be a server.
If the second device is a server, the second device may obtain reference image data corresponding to the first position information by searching a database that stores image data indexed by position information, and may then obtain target image data based on the first position information.
The reference image data corresponding to the first position information may be an original sharp image corresponding to the first position information, in which every object is sharp and clear and no area is occluded or blurred. The objects here include, but are not limited to, inanimate objects such as buildings and trees, as distinct from living people, animals, and the like. A blurred region may be a region of the image whose brightness, and/or sharpness, and/or recognizability is lower than a preset threshold; there may be one or more blurred regions, and the threshold may be set in advance.
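One simple way such a blurred-region test could be implemented is sketched below, assuming a grayscale image held in a NumPy array; the block size, the thresholds, and the gradient-based sharpness proxy are assumptions for illustration and are not the patent's definition.

    import numpy as np

    def find_blurred_regions(gray_image, block=64, brightness_thresh=40.0, sharpness_thresh=15.0):
        """Return the top-left (row, col) of image blocks whose brightness and/or
        sharpness fall below preset thresholds; sharpness is approximated here by
        the mean gradient magnitude."""
        h, w = gray_image.shape
        blurred = []
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                tile = gray_image[r:r + block, c:c + block].astype(np.float64)
                brightness = tile.mean()
                gy, gx = np.gradient(tile)              # simple sharpness proxy
                sharpness = np.hypot(gx, gy).mean()
                if brightness < brightness_thresh or sharpness < sharpness_thresh:
                    blurred.append((r, c))              # candidate blurred block
        return blurred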
When the second device has acquired the reference image data corresponding to the first position information, the target image data based on the first position information can be obtained from it. The target image data may be reference image data for an occluded object or for a blurred region. Optionally, the target image data may further include warning information associated with that reference image data, where the warning information is used to notify the user of a potential danger.
In an optional embodiment, when the second device has acquired the reference image data corresponding to the first position information, obtaining the target image data based on the first position information may be implemented as follows: the first image of the current real environment, acquired by the image acquisition device 206 based on the first position information, is compared with the reference image; the occluded object image or the image in the blurred region within that first image is identified; and image data and warning information for the occluded object image or the blurred-region image are then generated.
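A rough sketch of that comparison step follows, under the assumption that the first image and the reference image are aligned H x W x 3 NumPy arrays; the per-pixel difference criterion and its threshold are illustrative assumptions standing in for whatever occlusion/blur detection the second device actually uses.

    import numpy as np

    def build_target_image_data(first_image, reference_image, diff_thresh=30.0):
        """Locate pixels where the first image differs strongly from the reference
        image (a stand-in for 'occluded or blurred'), and return the corresponding
        reference pixels together with a warning flag."""
        first = first_image.astype(np.float64)
        reference = reference_image.astype(np.float64)
        mask = np.abs(first - reference).mean(axis=-1) > diff_thresh   # per-pixel mismatch
        target_patch = np.where(mask[..., None], reference_image, 0)   # reference data for that area
        warning = bool(mask.any())                                     # notify the user of a potential danger
        return {"mask": mask, "target_patch": target_patch, "warning": warning}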
If the second device is another wearable device or display device of the same or a different type, the second device may perform image acquisition based on the first position information to obtain the target image data, where the acquisition took place earlier than the current time. Because the image acquisition device of the second device captured images earlier than the current time, it can hold image data corresponding to each piece of position information; from these it can obtain reference image data corresponding to the first position information captured earlier than the current time, and thereby obtain target image data based on the first position information.
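The kind of position-indexed store of earlier-captured images that such a second device might keep is sketched, as a toy example, below; the class, the matching radius, and the equirectangular distance approximation are assumptions for illustration.

    import math

    class ReferenceImageStore:
        """Images captured earlier than the current time, keyed by capture position."""

        def __init__(self, match_radius_m=25.0):
            self.match_radius_m = match_radius_m
            self._entries = []                      # list of (lat, lon, image) tuples

        def add(self, lat, lon, image):
            self._entries.append((lat, lon, image))

        def lookup(self, lat, lon):
            """Return the stored image captured closest to (lat, lon), if any lies
            within the matching radius; otherwise return None."""
            best, best_dist = None, float("inf")
            for e_lat, e_lon, image in self._entries:
                # Equirectangular approximation, adequate for short distances.
                dx = math.radians(e_lon - lon) * math.cos(math.radians(lat)) * 6371000.0
                dy = math.radians(e_lat - lat) * 6371000.0
                dist = math.hypot(dx, dy)
                if dist < best_dist:
                    best, best_dist = image, dist
            return best if best_dist <= self.match_radius_m else None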
Further, according to an embodiment of the present invention, the acquiring, by the processor 103, of the target image data based on the first position information may further include: acquiring, in advance, a first image of the current real environment based on the first position information; and, if at least part of the first image satisfies a preset condition, acquiring from the second device target image data based on the first position information and captured earlier than the current time. The preset condition may be that an object in the first image is occluded or that a blurred region exists. When an object in the first image of the current real environment is occluded or a blurred region exists, the target image data can be acquired from the second device.
Further, the second device may determine, based on historical data, whether image data for the target area of the first image exists, and generate the target image data for that area when it does. The historical data may be a reference image based on the first position information uploaded by another user.
Furthermore, according to an embodiment of the present invention, the display device 107 presents the target image data in a manner adapted to the current real environment image based on the first position information.
It should be noted that, when the electronic device is a wearable device such as wearable glasses, the wearable glasses may optionally be AR glasses. In that case the image of the current real environment seen through the display device 107 (for example a spectacle lens) is not an image captured by the camera but the environment viewed directly through the lens, and the display device 107 presents the target image data adapted to that view. In another embodiment, for example when the electronic device is a display device, the current real environment image displayed by the display device 107 is generally an image captured by a camera, and the display device 107 presents the target image data adapted to that image. The present application does not limit how the current real environment image is displayed on the display device 107, as long as it can be displayed.
In an alternative embodiment, the presenting of the target image data by the display device 107 in a manner adapted to the current real environment image based on the first position information may comprise: presenting the target image data in the visual area through which the current real environment image is viewed, at least partially replacing the current real environment image based on the first position information with the target image data, or superimposing the target image data on it.
The visual area may be the region that the human eye can observe through the display device 107. Replacing or superimposing at least part of the current real environment image based on the first position information with the target image data means, specifically: since the target image data is the reference image data (and warning information) for the occluded object or blurred region, that reference data can be substituted for, or superimposed on, the portion of the current real environment image where the object is occluded or blurred. For example, if object A in the current real environment is occluded, the occluded image of A in the current real environment image may be replaced with, or overlaid by, the reference image A' of that object taken from the reference image.
Optionally, when the target image data at least partially replaces or is superimposed on the current real environment image based on the first position information, a target area in the first image of the current real environment image needs to be determined, where the target area includes an area in which the target object is occluded or a blurred region. This determination may be performed in the second device or in the electronic device. Once the target area in the first image is determined, the display device 107 replaces the current real environment image at that target area with the target image data, or displays the target image data at that target area superimposed on the current real environment image. Such target objects include, but are not limited to, inanimate objects such as buildings and trees, as distinct from living people, animals, and the like.
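A minimal compositing sketch for these two presentation modes is given below, assuming the target area is supplied as a boolean mask over an H x W x 3 frame; the mode names and the blending factor are illustrative assumptions.

    import numpy as np

    def present_target_image(current_frame, target_patch, mask, mode="replace", alpha=0.6):
        """Composite the target image data onto the current real environment image
        inside the target area: 'replace' swaps the pixels, 'superimpose' blends them."""
        out = current_frame.astype(np.float64)
        patch = target_patch.astype(np.float64)
        region = mask[..., None]                    # (H, W, 1), broadcast over channels
        if mode == "replace":
            out = np.where(region, patch, out)
        else:                                       # superimpose
            out = np.where(region, alpha * patch + (1.0 - alpha) * out, out)
        return out.astype(np.uint8)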
It should be noted that the target image data of the present invention may also include image data of living people and animals and the corresponding warning information. Living people and animals can be identified by detecting the contours of human or animal bodies. Detection techniques for people and animals are mature in the prior art and are not described in detail here.
With the present invention, when the positioning device 101 acquires the first position information, the processor 103 may acquire the target image data based on the first position information, and the display device 107 then presents the target image data in a manner adapted to the current real environment image based on the first position information. Therefore, even if an object in the current real environment image is occluded by an obstacle or other objects, target image data of the occluded object can be acquired based on the first position information, which better helps the user effectively observe and recognize the current real environment and make correct judgments about it.
According to an embodiment of the present invention, there is also provided a display method applied to an electronic device such as the electronic device 10 described above. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example by a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps shown or described may be performed in an order different from that presented herein. As shown in fig. 3, the method may include the following steps:
step S301, acquiring first position information;
step S303, acquiring target image data based on the first position information, wherein the target image data includes image data based on the first position information captured earlier than the current time;
step S305, presenting the target image data in a manner adapted to the current real environment image based on the first position information.
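Purely as a sketch, the three steps above can be strung together as follows; the collaborator objects (positioning_device, camera, second_device_client, display) and their method names are hypothetical and are not defined by the embodiments.

    def display_method(positioning_device, camera, second_device_client, display):
        """Illustrative orchestration of steps S301, S303 and S305."""
        # S301: acquire first position information.
        first_position = positioning_device.read()

        # S303: acquire target image data, based on the first position information and
        # captured earlier than the current time, from the second device.
        first_image = camera.capture()
        target = second_device_client.fetch(first_position, first_image)

        # S305: present the target image data adapted to the current real environment
        # image (replace or superimpose it at the target area).
        if target is not None:
            display.show(first_image, target)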
Furthermore, according to an embodiment of the present invention, presenting the target image data in a manner adapted to the current real environment image based on the first position information may include: presenting the target image data in the visual area through which the current real environment image is viewed, at least partially replacing the current real environment image based on the first position information with the target image data, or superimposing the target image data on it.
Further, according to an embodiment of the present invention, acquiring the target image data based on the first position information may include: acquiring a first image of a current real environment based on first position information in advance; if at least a part of the first image satisfies a preset condition, target image data based on the first position information captured earlier than the current time is acquired from the second device.
Furthermore, according to an embodiment of the present invention, presenting the target image data in a manner adapted to the current real environment image based on the first position information may further include: determining a target area in the first image, wherein the target area comprises an area in which a target object is occluded or a blurred region; and replacing the current real environment image at the target area in the first image with the target image data, or displaying the target image data at the target area in the first image superimposed on the current real environment image.
Further, according to an embodiment of the present invention, the first position information includes: a location parameter and/or an orientation parameter.
Further, according to an embodiment of the present invention, acquiring the target image data based on the first position information further includes: sending the first position information to the second device; and receiving, from the second device, target image data based on the first position information, wherein the target image data is obtained by the second device searching according to the first position information, or by the second device performing image acquisition based on the first position information.
Through this embodiment of the invention, when the first position information is acquired, the target image data can be acquired based on the first position information and then presented in a manner adapted to the current real environment image based on the first position information. In this way, even if an object is occluded by an obstacle or other objects, target image data of the occluded object can be acquired based on the first position information, which better helps the user effectively observe and recognize the current real environment and make correct judgments about it.
According to an embodiment of the present invention, there is also provided a display method applied to other electronic devices that communicate with the electronic device of the present invention. As shown in fig. 4, the method may include the steps of:
step S402, acquiring first position information;
and step S404, searching for and generating target image data, based on the first position information, captured earlier than the current time, wherein the target image data is used by the first device to present the target image data in a manner adapted to the current real environment image based on the first position information.
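An illustrative sketch of these two steps on the second device follows; the store and detector collaborators are hypothetical stand-ins for whatever lookup and comparison logic the second device actually uses.

    def second_device_method(first_position, first_image, store, detector):
        """Illustrative handling of steps S402 and S404."""
        # S402: first position information received from the first device.
        lat, lon = first_position                        # location parameter
        reference = store.lookup(lat, lon)               # earlier-captured reference image
        if reference is None:
            return None                                  # nothing usable for this position

        # S404: search for / generate target image data for the occluded or blurred area.
        return detector.compare(first_image, reference)  # mask + reference pixels + warning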
The first device described above may be, for example, the electronic device 10 of the present invention.
Further, according to an embodiment of the present invention, searching for and generating target image data, based on the first position information, captured earlier than the current time includes: judging whether a first image of the current real environment satisfies a preset condition; and, when it is determined that at least part of the first image of the current real environment satisfies the preset condition, acquiring target image data based on the first position information and captured earlier than the current time.
Further, according to an embodiment of the present invention, searching for and generating the target image data captured earlier than the current time further includes: determining a target area in the first image, wherein the target area comprises an area in which a target object is occluded or a blurred region; and searching the current real environment image at the target area in the first image to generate target image data for the target area.
With the above-described embodiment of the present invention, when the first position information is acquired, target image data based on the first position information and captured earlier than the current time can be searched for and generated, and the electronic device 10 then presents the target image data in a manner adapted to the current real environment image based on the first position information. In this way, even if an object is occluded by an obstacle or other objects, target image data of the occluded object can be acquired based on the first position information, which better helps the user effectively observe and recognize the current real environment and make correct judgments about it.
According to an embodiment of the present invention, there is also provided an electronic device 20, which is another electronic device communicating with the electronic device of the present invention, as shown in fig. 5, where the electronic device 20 includes:
a processor 501 adapted to implement instructions; and
a memory 503 adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor:
acquiring first position information;
and searching for and generating target image data, based on the first position information, captured earlier than the current time, wherein the target image data is used by the first device to present the target image data in a manner adapted to the current real environment image based on the first position information.
Through this embodiment of the invention, when the first position information is acquired, target image data based on the first position information and captured earlier than the current time can be searched for and generated, and the first device then presents the target image data in a manner adapted to the current real environment image based on the first position information. In this way, even if an object is occluded by an obstacle or other objects, target image data of the occluded object can be acquired based on the first position information, which better helps the user effectively observe and recognize the current real environment and make correct judgments about it.
According to an embodiment of the present invention, there is also provided a display system including the electronic device 10 and the electronic device 20. Fig. 6 is a schematic diagram of a communication architecture of a display system according to an embodiment of the present invention. As shown in fig. 6, the environment may include a hardware environment and a network environment.
The hardware environment comprises the electronic device 10 and the electronic device 20. There may be one or more electronic devices 20, and the electronic device 20 may include a plurality of processing nodes for processing the first position information and/or the first image of the current real environment transmitted from the electronic device 10. The plurality of processing nodes may be integrated or external. For example, the electronic device 20 is connected to the electronic device 10 via a network.
Typically, the network may be the internet or a mobile data network including, but not limited to: global system for mobile communications (GSM) networks, Code Division Multiple Access (CDMA) networks, Wideband Code Division Multiple Access (WCDMA) networks, Long Term Evolution (LTE) communication networks, and the like. Different types of communication networks may be operated by different operators. The type of communication network does not constitute a limitation on the embodiments of the present invention.
The electronic device 20 may be a server-side electronic device or an electronic device 10-side electronic device, which is not limited in the embodiment of the present invention.
Through the embodiments of the present invention, even if an object is occluded by an obstacle or other objects, target image data of the occluded object can be obtained based on the first position information, which better helps the user effectively observe and recognize the current real environment so that the user can make correct judgments about it.
There is also provided, in accordance with an embodiment of the present invention, a non-volatile storage medium readable by a computer, storing computer program instructions, which when executed by the computer, perform the steps of: acquiring first position information; acquiring target image data based on the first position information, wherein the target image data comprises image data based on the first position information captured earlier than the current time; presenting the target image data in a manner that is adapted to a current real environment image based on the first location information.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a non-volatile storage medium (such as ROM/RAM, magnetic disk, optical disk) readable by a computer and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
It should be noted that, for the sake of simplicity, the above-mentioned embodiments of the method and the electronic device are all described as a series of acts or a combination of modules, but those skilled in the art should understand that the present invention is not limited by the described order of acts or the connection of modules, because some steps may be performed in other orders or simultaneously and some modules may be connected in other manners according to the present invention.
Furthermore, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments, and that the above-described embodiments are numbered merely for purposes of illustration and that the acts and modules involved are not necessarily required for the invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed technical contents can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (12)

1. A display method, the method comprising:
acquiring first position information;
acquiring target image data based on the first position information, wherein the target image data comprises image data based on the first position information captured earlier than the current time;
presenting the target image data in a manner that is adapted to a current real environment image based on the first location information;
wherein the acquiring target image data based on the first position information comprises:
acquiring a first image of the current real environment based on the first position information in advance;
acquiring target image data based on the first position information captured earlier than a current time if an occluded target object exists or a blurred region exists in at least part of the first image compared to a reference image based on the first position information;
the reference image is an image in which every object is clear and distinct and no object is occluded or blurred;
wherein the acquiring target image data based on the first position information further comprises:
sending the first location information to a second device;
receiving target image data which is sent by the second device and is based on the first position information, wherein the target image data is obtained by the second device searching according to the first position information, or by the second device performing image acquisition based on the first position information;
the target image data comprises reference image data and warning information for an image of an occluded object or an image of a blurred region, which are generated by the second device by comparing a first image of the current real environment image, based on the first position information, with the reference image to identify the image of the occluded object or the image of the blurred region in that first image.
2. The display method of claim 1, the presenting the target image data in a manner that is adapted to a current real environment image based on the first location information comprising:
presenting the target image data in a visual area of viewing the current real-environment image, at least partially replacing or superimposing the current real-environment image based on the first location information with the target image data.
3. The display method according to claim 1, the acquiring target image data based on the first position information comprising:
acquiring a first image of the current real environment based on the first position information in advance;
if at least a part of the first image satisfies a preset condition, target image data based on the first position information captured earlier than the current time is acquired from the second device.
4. The display method according to claim 3, wherein said presenting the target image data in a manner adapted to a current real environment image based on the first location information comprises:
determining a target area in the first image, wherein the target area comprises an area in which a target object is occluded or a blurred region;
replacing the current real environment image at the target area in the first image with the target image data or displaying the target image data at the target area in the first image superimposed with the current real environment image.
5. The display method according to claim 1, the first position information comprising: a location parameter and/or an orientation parameter.
6. A display method, the method comprising:
acquiring first position information from a first device;
searching for and generating target image data which is captured earlier than the current time and is based on the first position information, wherein the target image data is used by a first device to present the target image data in a manner adapted to a current real environment image based on the first position information;
wherein the searching for and generating target image data based on the first position information captured earlier than the current time comprises:
comparing a first image of a current real environment with a reference image based on the first location information;
acquiring target image data based on the first position information captured earlier than a current time if an occluded target object exists or a blurred region exists in at least part of the first image compared to a reference image based on the first position information;
the reference image is an image in which every object is clear and distinct and no object is occluded or blurred;
the target image data comprises reference image data and warning information for an image of an occluded object or an image of a blurred region, which are generated by comparing a first image of the current real environment image, based on the first position information, with the reference image and identifying the image of the occluded object or the image of the blurred region in that first image.
7. The display method according to claim 6, wherein the searching for and generating of target image data, based on the first position information, captured earlier than the current time includes:
judging whether a first image of the current real environment meets a preset condition or not;
and acquiring target image data which is captured earlier than the current moment and is based on the first position information under the condition that at least part of the first image of the current real environment meets a preset condition.
8. The display method according to claim 7, the searching for and generating of target image data, based on the first position information, captured earlier than the current time further comprising:
determining a target area in the first image, wherein the target area comprises an area in which a target object is occluded or a blurred region;
and searching the current real environment image at the target area in the first image to generate target image data at the target area.
9. An electronic device, comprising:
the positioning device is used for acquiring first position information;
a processor adapted to implement instructions; and
a memory adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor:
acquiring target image data based on the first position information, wherein the target image data comprises image data based on the first position information captured earlier than the current time;
wherein the acquiring target image data based on the first position information comprises:
acquiring a first image of the current real environment based on the first position information in advance;
acquiring target image data based on the first position information captured earlier than a current time if an occluded target object exists or a blurred region exists in at least part of the first image compared to a reference image based on the first position information;
the reference image is an image in which every object is clear and distinct and no object is occluded or blurred;
display means for presenting the target image data in a manner adapted to a current real environment image based on the first location information;
wherein the acquiring target image data based on the first position information further comprises:
sending the first location information to a second device;
receiving target image data which is sent by the second device and is based on the first position information, wherein the target image data is obtained by the second device searching according to the first position information, or by the second device performing image acquisition based on the first position information;
the target image data comprises reference image data and warning information for an image of an occluded object or an image of a blurred region, which are generated by the second device by comparing a first image of the current real environment image, based on the first position information, with the reference image to identify the image of the occluded object or the image of the blurred region in that first image.
10. An electronic device, comprising:
a processor adapted to implement instructions; and
a memory adapted to store a plurality of instructions, the instructions adapted to be loaded and executed by the processor:
acquiring first position information from a first device;
searching for and generating target image data which is captured earlier than the current time and is based on the first position information, wherein the target image data is used by a first device to present the target image data in a manner adapted to a current real environment image based on the first position information;
wherein the searching for and generating of the target image data based on the first position information and captured earlier than the current time comprises:
comparing a first image of the current real environment with a reference image based on the first position information;
acquiring target image data based on the first position information and captured earlier than the current time if an occluded target object or a blurred region exists in at least part of the first image compared with the reference image based on the first position information;
wherein the reference image is an image in which each object is clear and distinct and no object is occluded or located in a blurred area;
wherein the target image data comprises the reference image and alarm information for the image of the occluded object or the image of the blurred region, the alarm information being generated by comparing the first image of the current real environment based on the first position information with the reference image and acquiring the image of the occluded object or the image of the blurred region in the first image.
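For orientation only, the sketch below shows one way the comparison and packaging step on this second electronic device could look: difference the first image against the reference, derive a bounding box for the occluded or blurred region, and return the reference image together with alarm information. The JSON-style return shape and the function name `build_target_image_data` are assumptions chosen to pair with the client sketch above.

```python
import base64

import cv2

def build_target_image_data(first_image, reference_image, diff_threshold=40):
    """Compare the first image with the reference image, describe the occluded
    or blurred region as alarm information, and package it with the reference."""
    diff = cv2.absdiff(cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(reference_image, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    alarm = None
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        alarm = f"occluded or blurred region at ({x}, {y}), size {w}x{h}"

    ok, encoded = cv2.imencode(".jpg", reference_image)
    if not ok:
        raise RuntimeError("failed to encode the reference image")
    return {
        "image_b64": base64.b64encode(encoded.tobytes()).decode("ascii"),
        "alarm": alarm,
    }
```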
11. A display system comprising the electronic device of claim 9 and the electronic device of claim 10.
12. A non-transitory storage medium readable by a computer, storing computer program instructions which, when executed by the computer, perform the steps of:
acquiring first position information;
acquiring target image data based on the first position information, wherein the target image data comprises image data based on the first position information captured earlier than the current time;
presenting the target image data in a manner that is adapted to a current real environment image based on the first position information;
wherein the acquiring target image data based on the first position information comprises:
acquiring a first image of the current real environment based on the first position information in advance;
acquiring target image data based on the first position information and captured earlier than the current time if an occluded target object or a blurred region exists in at least part of the first image compared with a reference image based on the first position information;
wherein the reference image is an image in which each object is clear and distinct and no object is occluded or located in a blurred area;
wherein the acquiring target image data based on the first position information further comprises:
sending the first position information to a second device;
receiving the target image data based on the first position information sent by the second device, wherein the target image data is obtained by the second device through searching based on the first position information or through image acquisition performed based on the first position information;
wherein the target image data comprises the reference image and alarm information for the image of the occluded object or the image of the blurred region, the alarm information being generated by the second device by comparing the first image of the current real environment based on the first position information with the reference image so as to acquire the image of the occluded object or the image of the blurred region in the first image.
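The recurring "presenting ... in a manner adapted to a current real environment image" step has not been illustrated yet, so here is one plausible realization under stated assumptions: register the earlier-captured target image to the current view with ORB feature matching and a RANSAC homography, then alpha-blend the warped result over the live image. The feature detector, match count, and blend weight are illustrative choices, not the patent's prescribed method.

```python
import cv2
import numpy as np

def present_adapted(target_image, current_image, alpha=0.6):
    """Warp the earlier-captured target image onto the current view using
    ORB feature matching plus a homography, then alpha-blend the two images."""
    orb = cv2.ORB_create(1000)
    kp_t, des_t = orb.detectAndCompute(cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY), None)
    kp_c, des_c = orb.detectAndCompute(cv2.cvtColor(current_image, cv2.COLOR_BGR2GRAY), None)
    if des_t is None or des_c is None:
        return current_image   # nothing to match; fall back to the live view

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_c), key=lambda m: m.distance)[:100]
    if len(matches) < 4:
        return current_image   # a homography needs at least four correspondences

    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_c[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return current_image

    h, w = current_image.shape[:2]
    warped = cv2.warpPerspective(target_image, H, (w, h))
    return cv2.addWeighted(current_image, 1 - alpha, warped, alpha, 0)
```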
CN201710641469.7A 2017-07-31 2017-07-31 Display method, system, electronic device, and non-volatile storage medium Active CN107402735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710641469.7A CN107402735B (en) 2017-07-31 2017-07-31 Display method, system, electronic device, and non-volatile storage medium

Publications (2)

Publication Number Publication Date
CN107402735A CN107402735A (en) 2017-11-28
CN107402735B true CN107402735B (en) 2021-03-19

Family

ID=60401804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710641469.7A Active CN107402735B (en) 2017-07-31 2017-07-31 Display method, system, electronic device, and non-volatile storage medium

Country Status (1)

Country Link
CN (1) CN107402735B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020030156A1 (en) * 2018-08-10 2020-02-13 广东虚拟现实科技有限公司 Image processing method, terminal device, and computer readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104079811A (en) * 2014-07-24 2014-10-01 广东欧珀移动通信有限公司 Method and device for filtering out obstacles during photographing
CN104272345A (en) * 2012-05-18 2015-01-07 日产自动车株式会社 Display device for vehicle, display method for vehicle, and display program for vehicle
CN104539868A (en) * 2014-11-24 2015-04-22 联想(北京)有限公司 Information processing method and electronic equipment
CN106357804A (en) * 2016-10-31 2017-01-25 北京小米移动软件有限公司 Image processing method, electronic equipment and cloud server
CN106454093A (en) * 2016-10-18 2017-02-22 北京小米移动软件有限公司 Image processing method, image processing device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9514710B2 (en) * 2014-03-31 2016-12-06 International Business Machines Corporation Resolution enhancer for electronic visual displays

Also Published As

Publication number Publication date
CN107402735A (en) 2017-11-28

Similar Documents

Publication Publication Date Title
JP6364952B2 (en) Information processing apparatus, information processing system, and information processing method
US11263769B2 (en) Image processing device, image processing method, and image processing system
KR20160048140A (en) Method and apparatus for generating an all-in-focus image
RU2749643C1 (en) Head-mounted display device and method performed by them
EP3435346A1 (en) Spectacle-type wearable terminal, and control method and control program for same
JP2018160219A (en) Moving route prediction device and method for predicting moving route
CN111010547A (en) Target object tracking method and device, storage medium and electronic device
CN109587441B (en) Method for directly accessing video data stream and data between devices in video monitoring system
TW201448585A (en) Real time object scanning using a mobile phone and cloud-based visual search engine
WO2018087462A1 (en) Individual visual immersion device for a moving person with management of obstacles
CN111191507A (en) Safety early warning analysis method and system for smart community
WO2015028294A1 (en) Monitoring installation and method for presenting a monitored area
CN107402735B (en) Display method, system, electronic device, and non-volatile storage medium
CN110678353A (en) Externally displaying captured images of a vehicle interior in VR glasses
KR101906560B1 (en) Server, method, wearable device for supporting maintenance of military apparatus based on correlation data between object in augmented reality
KR101473671B1 (en) Method and apparatus for detection of phishing site by image comparison
KR101700651B1 (en) Apparatus for tracking object using common route date based on position information
CN110881141B (en) Video display method and device, storage medium and electronic device
WO2015198284A1 (en) Reality description system and method
CN113010009B (en) Object sharing method and device
US20220019779A1 (en) System and method for processing digital images
CN110300290B (en) Teaching monitoring management method, device and system
CN111862576A (en) Method for tracking suspected target, corresponding vehicle, server, system and medium
CN108230608B (en) Method and terminal for identifying fire
KR101702452B1 (en) A method and a system for providing cctv image applied augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant