CN115134528B - Image display method, related device, server and intelligent glasses - Google Patents


Info

Publication number
CN115134528B
CN115134528B (application CN202210762637.9A)
Authority
CN
China
Prior art keywords
user
image
camera
intelligent glasses
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210762637.9A
Other languages
Chinese (zh)
Other versions
CN115134528A (en)
Inventor
黄文强
Current Assignee
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date
Filing date
Publication date
Application filed by Bank of China Ltd
Priority to CN202210762637.9A
Publication of CN115134528A
Application granted
Publication of CN115134528B
Legal status: Active

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B2027/0178 Eyeglass type

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an image display method comprising the following steps: receiving an image captured by a first camera and sent by intelligent glasses, wherein the first camera is mounted on the intelligent glasses; determining the user's gaze direction from the image captured by the first camera; obtaining an image corresponding to the user's gaze direction and generating a target image from it; and returning the target image to the intelligent glasses so that the intelligent glasses display the target image to the user. The method introduces intelligent glasses that receive, from a back-end server, images generated using machine vision and covering different ranges, and displays those images to the user. This addresses the problem that customers who need help go unnoticed because the human field of view is limited, and helps the user find such customers promptly.

Description

Image display method, related device, server and intelligent glasses
Technical Field
The application relates to the technical field of computers, in particular to an image display method, an image display device, a server and intelligent glasses.
Background
Banking outlets are offline service points opened by banks to provide financial services to customers. As modern technology advances, banking outlets are equipped with more and more self-service devices, so customers no longer need to handle every transaction at a counter. This shortens the time customers spend queuing at the outlet and also eases the workload of bank staff.
However, a customer may still need a staff member's assistance while transacting business on a self-service device. When an outlet has many customers, its complex environment limits what each staff member can observe, so customers who need help may go unnoticed and the customer experience suffers.
The industry therefore needs a scheme that helps staff promptly find customers who need help and improves the customer experience.
Disclosure of Invention
The application provides an image display method that introduces intelligent glasses: the glasses receive images that a back-end server generates using machine vision and that cover different ranges, and display those images to the user. This addresses the problem that customers who need help go unnoticed because the human field of view is limited, and improves the customer experience. The application also provides an apparatus, a server and intelligent glasses corresponding to the method.
In a first aspect, the present application provides an image display method. The method is applied to a server and comprises the following steps:
receiving an image captured by a first camera and sent by intelligent glasses, wherein the first camera is mounted on the intelligent glasses;
determining the user's gaze direction from the image captured by the first camera;
obtaining an image corresponding to the user's gaze direction, and generating a target image from the image corresponding to the user's gaze direction;
and returning the target image to the intelligent glasses so that the intelligent glasses display the target image to the user.
In some possible implementations, the determining, according to the image acquired by the first camera, a gaze direction of the user includes:
determining a center point of the image acquired by the first camera and position information of the user according to the image acquired by the first camera;
and determining the gazing direction of the user according to the central point of the image acquired by the first camera, the position information of the user and the panorama of the environment where the user is located.
In some possible implementations, the panorama of the environment in which the user is located is generated by:
receiving a plurality of images acquired by a plurality of second cameras, wherein the plurality of second cameras are installed in the environment where the user is located;
and generating a panoramic image of the environment where the user is located according to the plurality of images acquired by the plurality of second cameras.
In some possible implementations, the obtaining the image corresponding to the gaze direction of the user, generating the target image according to the image corresponding to the gaze direction of the user, includes:
acquiring images corresponding to the gazing direction of the user according to the images acquired by the second cameras;
and generating a target image according to the image corresponding to the gazing direction of the user and the image range corresponding to the intelligent glasses.
In some possible implementations, the image range corresponding to the smart glasses is determined by:
acquiring an intelligent glasses identifier and an image range attribute input by a user, wherein the image range attribute comprises one or more of image range size and image range shape;
and determining the image range corresponding to the intelligent glasses according to the intelligent glasses identifier and the image range attribute.
In some possible implementations, the method further includes:
and magnifying the target image when the user performs a target eye action.
In a second aspect, the present application provides an image display method. The method is applied to intelligent glasses, wherein a first camera is installed on the intelligent glasses, and the method comprises the following steps:
acquiring an image acquired by the first camera, and sending the image acquired by the first camera to a server;
receiving a target image returned by the server, wherein the target image is generated by the server from the user's gaze direction, which the server determines from the image captured by the first camera, and from the image corresponding to that gaze direction;
and displaying the target image to the user.
In some possible implementations, the gaze direction of the user is determined by:
the server determines the center point of the image acquired by the first camera and the position information of the user according to the image acquired by the first camera;
and the server determines the gazing direction of the user according to the central point of the image acquired by the first camera, the position information of the user and the panorama of the environment where the user is located.
In some possible implementations, the panorama of the environment in which the user is located is generated by:
the server receives a plurality of images acquired by a plurality of second cameras, and the plurality of second cameras are installed in the environment where the user is located;
and the server generates a panoramic image of the environment where the user is located according to the images acquired by the second cameras.
In some possible implementations, the target image is generated by:
the server acquires images corresponding to the gazing directions of the users according to the images acquired by the second cameras;
and the server generates a target image according to the image corresponding to the gazing direction of the user and the image range corresponding to the intelligent glasses, so that the intelligent glasses receive the target image.
In some possible implementations, the image range corresponding to the smart glasses is determined by:
acquiring an intelligent glasses identifier and an image range attribute input by a user, wherein the image range attribute comprises one or more of image range size and image range shape;
and determining the image range corresponding to the intelligent glasses according to the intelligent glasses identifier and the image range attribute.
In some possible implementations, the method further includes:
when the user performs the target eye action, receiving the target image magnified by the server;
and displaying the magnified target image to the user.
In a third aspect, the present application provides an image display apparatus. The device comprises:
the acquisition module is used for acquiring the image acquired by the first camera and sending the image acquired by the first camera to a server;
the receiving module is used for receiving a target image returned by the server, wherein the target image is generated by the server from the user's gaze direction, determined from the image captured by the first camera, and from the image corresponding to that gaze direction;
and the display module is used for displaying the target image to the user.
In some possible implementations, the gaze direction of the user is determined by:
the server determines the center point of the image acquired by the first camera and the position information of the user according to the image acquired by the first camera;
and the server determines the gazing direction of the user according to the central point of the image acquired by the first camera, the position information of the user and the panorama of the environment where the user is located.
In some possible implementations, the panorama of the environment in which the user is located is generated by:
the server receives a plurality of images acquired by a plurality of second cameras, and the plurality of second cameras are installed in the environment where the user is located;
and the server generates a panoramic image of the environment where the user is located according to the images acquired by the second cameras.
In some possible implementations, the target image is generated by:
the server acquires images corresponding to the gazing directions of the users according to the images acquired by the second cameras;
and the server generates a target image according to the image corresponding to the gazing direction of the user and the image range corresponding to the intelligent glasses, so that the intelligent glasses receive the target image.
In some possible implementations, the image range corresponding to the smart glasses is determined by:
acquiring an intelligent glasses identifier and an image range attribute input by a user, wherein the image range attribute comprises one or more of image range size and image range shape;
and determining the image range corresponding to the intelligent glasses according to the intelligent glasses identifier and the image range attribute.
In some possible implementations, the receiving module is further configured to:
when the user performs the target eye action, receive the target image magnified by the server;
the display module is further configured to:
and display the magnified target image to the user.
In a fourth aspect, the present application provides a server. The server comprises a processor and a memory, the memory having instructions stored therein, the processor executing the instructions to cause the server to perform the method according to the first aspect or any implementation of the first aspect of the present application.
In a fifth aspect, the present application provides smart glasses. The smart glasses comprise the apparatus according to the third aspect or any implementation of the third aspect, causing the smart glasses to perform the method according to the second aspect or any implementation of the second aspect of the present application.
The implementations provided in the above aspects may be further combined to provide additional implementations of the present application.
Based on the above description, the technical solution of the present application has the following beneficial effects:
the method is particularly applied to a server, the server firstly receives images collected by a first camera arranged on the intelligent glasses, determines the gazing direction of a user according to the images, then acquires images corresponding to the gazing direction of the user, generates target images, and returns the target images to the intelligent glasses, so that the intelligent glasses display the target images to the user. According to the method, the intelligent glasses are introduced, the server of the intelligent glasses receiving background generates images comprising different ranges based on machine vision, the images in the different ranges are displayed to the user, the problem that clients needing help cannot be timely perceived due to limited human eye observation range is solved, the user is helped to timely find the clients needing help, and the experience of the clients is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present application will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flowchart of an image display method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image display apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a server for implementing image display according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present application will be understood more thoroughly and completely. It should be understood that the drawings and examples of the present application are for illustrative purposes only and are not intended to limit its scope of protection.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Definitions of other terms are given in the description below.
It should be noted that the terms "first," "second," and the like herein are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by such devices, modules, or units.
It should be noted that references to "a", "an" or "a plurality" in this application are illustrative rather than limiting; those of ordinary skill in the art will appreciate that, unless the context clearly indicates otherwise, they should be understood as "one or more".
In order to facilitate understanding of the technical scheme of the application, a specific application scenario of the application is described below.
Banking outlets opened offline by banks provide financial services to customers. An outlet generally includes several counters, through which staff communicate with customers to complete their transactions. To keep customers from waiting too long, outlets were traditionally staffed with a relatively large number of employees. In recent years, with the intelligent upgrading of banking outlets, more and more transactions can be completed on self-service devices; for example, customers can make small cash withdrawals at a self-service teller machine and confirm account-opening information at a self-service terminal. Because self-service devices have become widespread, customers no longer need to handle every transaction at a counter, and outlets can operate with fewer staff.
However, a customer using a self-service device may still need a staff member's assistance, for example an elderly customer who is not proficient with the device, or a customer who does not understand something shown on its screen. Staff should assist promptly in such situations; yet because the outlet environment is complex and the number of staff is small, a staff member's viewing angle may keep them from noticing a customer who needs assistance right away, which degrades the customer experience.
Based on this, an embodiment of the application provides an image display method. The method is applied to a server. The server first receives the image captured by the first camera mounted on the intelligent glasses and determines the user's gaze direction from that image; it then obtains the image corresponding to the gaze direction, generates a target image, and returns the target image to the intelligent glasses, which display it to the user. By introducing intelligent glasses that receive images generated by a back-end server using machine vision and covering different ranges, and displaying those images to the user, the method addresses the problem that customers who need help go unnoticed because the human field of view is limited, helps the user find such customers promptly, and improves the customer experience.
Next, a detailed description will be given of an image display method provided in an embodiment of the present application with reference to the accompanying drawings.
Referring to fig. 1, which shows a flowchart of an image display method, the method may be executed by a server and specifically includes the following steps:
s101: the server receives images collected by the first cameras sent by the intelligent glasses.
In this embodiment of the application, the user wears intelligent glasses on which a first camera is mounted. The first camera faces outward, so the image it captures approximates what the user sees from their own viewpoint. For example, if the user stands facing a painting, the image captured by the first camera may be an image of that painting. In other words, the first camera captures images that follow the user's viewpoint, and the captured images reflect the user's movements, such as walking or turning the head.
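As a minimal illustration of S101, the sketch below packages one first-camera frame for upload to the server, and the matching server-side parser. The wire format (a length-prefixed JSON header followed by raw pixel bytes), the function names and the field names are assumptions for illustration only; the patent does not specify a transport protocol.

```python
import json

import numpy as np


def frame_payload(frame: np.ndarray, glasses_id: str) -> bytes:
    """Package one first-camera frame as a length-prefixed JSON header
    followed by the raw pixel bytes. The format is hypothetical."""
    header = json.dumps({
        "glasses_id": glasses_id,
        "shape": list(frame.shape),
        "dtype": str(frame.dtype),
    }).encode("utf-8")
    return len(header).to_bytes(4, "big") + header + frame.tobytes()


def parse_payload(payload: bytes):
    """Server-side inverse: recover the header dict and the frame array."""
    hlen = int.from_bytes(payload[:4], "big")
    header = json.loads(payload[4:4 + hlen])
    frame = np.frombuffer(payload[4 + hlen:], dtype=header["dtype"])
    return header, frame.reshape(header["shape"])
```

In practice the glasses would send such payloads over the communication link described below, and the server would parse each one before the gaze-direction step.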
S102: The server determines the user's gaze direction from the image captured by the first camera.
In this embodiment of the application, the server may first determine, from the image captured by the first camera, the center point of that image and the user's location information. For example, when the image captured by the first camera is rectangular, its center point may be the center of the rectangle. The user's location information may be the user's current position.
The server can then determine the user's gaze direction from the image's center point, the user's location information and a panorama of the environment in which the user is located. Specifically, the server may determine the center point's location by matching the image's center point against the panorama of the user's environment, and the direction of the line from the user's position to the center point's position may be taken as the user's gaze direction.
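Under the assumption that both the user's position and the matched centre point can be expressed as 2-D coordinates in the panorama's frame, the gaze-direction step reduces to normalising the vector between the two positions. The function name and coordinate convention are illustrative, not taken from the patent:

```python
import numpy as np


def gaze_direction(user_pos, center_pos):
    """Gaze direction as the unit vector pointing from the user's position
    to the panorama position matched against the centre point of the
    first-camera image. Both inputs are (x, y) coordinates in the
    panorama's coordinate frame."""
    v = np.asarray(center_pos, dtype=float) - np.asarray(user_pos, dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        raise ValueError("user position and centre point coincide")
    return v / norm
```

For example, a user at (0, 0) whose image centre is matched at panorama position (3, 4) yields the unit gaze vector (0.6, 0.8).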
In some possible implementations, the panorama of the environment in which the user is located may be generated as follows: the server receives a plurality of images captured by a plurality of second cameras and generates a panorama of the user's environment from those images. Specifically, the second cameras may be installed in the user's environment, for example at different positions chosen according to each camera's shooting angle, so that together they cover the environment completely. The server then receives the images captured by the second cameras, compares their contents, keeps only one copy of any overlapping pixels, and splices the images into one complete image of the environment, thereby generating the panorama of the environment in which the user is located.
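The splicing step can be sketched as follows for the simplified case of second cameras arranged left to right with a fixed, known column overlap. A real deployment would first register the images against each other (for example by feature matching), which this sketch deliberately omits; the function names are illustrative.

```python
import numpy as np


def stitch_pair(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Splice two equal-height images that share `overlap` columns,
    keeping only one copy of the overlapping pixels (the left image's),
    as the method describes."""
    return np.concatenate([left, right[:, overlap:]], axis=1)


def build_panorama(images, overlap: int) -> np.ndarray:
    """Fold an ordered list of second-camera images into one panorama."""
    pano = images[0]
    for img in images[1:]:
        pano = stitch_pair(pano, img, overlap)
    return pano
```

Three 10-pixel-wide frames with a 3-column overlap thus splice into a 24-pixel-wide panorama (10 + 7 + 7).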
S103: the server acquires the image corresponding to the user's gaze direction, and generates a target image according to the image corresponding to the user's gaze direction.
In this embodiment of the application, the server may obtain the image corresponding to the user's gaze direction from the images captured by the second cameras. Specifically, after determining the user's gaze direction, the server may receive images captured by those second cameras that can capture the environment in the user's gaze direction, and thereby obtain the image corresponding to the gaze direction. It can be appreciated that the range of this image differs from the range of the image captured by the first camera on the glasses the user wears; in some possible implementations it is larger, so that target images of different ranges can be displayed to the user.
The server may then generate the target image from the image corresponding to the user's gaze direction and the image range corresponding to the intelligent glasses. Specifically, the server may obtain an intelligent glasses identifier and an image range attribute entered in advance by the user; the identifier may be the glasses' serial number, and the image range attribute may include one or more of an image range size and an image range shape. In other words, the user may preset a display size and display shape that suit their needs, and the server determines the image range corresponding to the intelligent glasses from the parameters the user entered. The server may then cut, from the image corresponding to the user's gaze direction, a region satisfying the image range corresponding to the intelligent glasses, and generate the target image.
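A minimal sketch of this interception step, assuming the per-glasses configuration is a lookup from a hypothetical identifier to a rectangular (height, width) range; the shape attribute and non-rectangular ranges are omitted, and the registry contents are invented for illustration:

```python
import numpy as np

# Hypothetical registry mapping a glasses identifier to its preset
# image range attribute (rectangular height x width, in pixels).
GLASSES_RANGES = {
    "glasses-001": (120, 160),
}


def generate_target_image(gaze_image: np.ndarray, glasses_id: str,
                          center=None) -> np.ndarray:
    """Cut the region matching the glasses' configured image range out of
    the image for the user's gaze direction, centred on `center` (defaults
    to the image centre) and clamped to the image bounds."""
    h, w = GLASSES_RANGES[glasses_id]
    H, W = gaze_image.shape[:2]
    cy, cx = center if center is not None else (H // 2, W // 2)
    top = max(0, min(cy - h // 2, H - h))
    left = max(0, min(cx - w // 2, W - w))
    return gaze_image[top:top + h, left:left + w]
```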
It should be noted that the target image is generated by the server using machine vision and can include imagery covering different ranges, so the user can observe a larger area through the displayed target image. This addresses the problem that the limited range of human vision keeps the user from noticing customers who need help in time.
S104: The server returns the target image to the intelligent glasses so that the intelligent glasses display the target image to the user.
In this embodiment of the application, the server returns the generated target image to the intelligent glasses so that the glasses display it to the user. It can be understood that the range of the target image displayed to the user can be larger than the range the user actually observes, so the user can obtain, through the target image, information they could not capture by direct observation. In this way, the user can promptly find customers who need assistance.
In some possible implementations, the server may magnify the target image when the user performs a target eye action. Specifically, the intelligent glasses may be fitted with a pressure sensor. When a given target image has been displayed to the user for longer than a preset time, the server may conclude that the user is continuously gazing at one spot. The server may then receive the user's eye movements as captured by the pressure sensor; for example, the sensor may detect blinking from the movement pattern of the user's eye muscles. When the user performs the target eye action (for example, blinking twice), the server magnifies the target image so that the user can observe its details.
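The magnification step might look like the following sketch, which doubles the apparent size of the centre of the target image once the assumed target eye action (two blinks) has been counted. The blink count itself would come from the pressure sensor, which is not modelled here; the function name and defaults are illustrative.

```python
import numpy as np


def maybe_magnify(target_image: np.ndarray, blink_count: int,
                  target_blinks: int = 2, factor: int = 2) -> np.ndarray:
    """Once the user has blinked `target_blinks` times, return the centre
    of the target image magnified `factor`x by nearest-neighbour pixel
    repetition (dependency-free); otherwise return the image unchanged."""
    if blink_count < target_blinks:
        return target_image
    H, W = target_image.shape[:2]
    ch, cw = H // factor, W // factor              # size of the centre crop
    top, left = (H - ch) // 2, (W - cw) // 2
    crop = target_image[top:top + ch, left:left + cw]
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)
```

With the default factor of 2, the magnified output occupies the same pixel area as the original target image, so the glasses' display logic needs no change.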
In summary, the server first receives the image captured by the first camera mounted on the intelligent glasses and determines the user's gaze direction from that image; it then obtains the image corresponding to the gaze direction, generates a target image, and returns the target image to the intelligent glasses, which display it to the user. By introducing intelligent glasses that receive images generated by a back-end server using machine vision and covering different ranges, and displaying those images to the user, the method addresses the problem that customers who need help go unnoticed because the human field of view is limited and helps the user find such customers promptly.
Based on the method provided by the embodiment of the application, the embodiment of the application also provides an image display device corresponding to the method. The units/modules described in the embodiments of the present application may be implemented by software, or may be implemented by hardware. Wherein the names of the units/modules do not constitute a limitation of the units/modules themselves in some cases.
Referring to fig. 2, the image display apparatus 200 includes:
an acquisition module 201, configured to acquire an image acquired by the first camera, and send the image acquired by the first camera to a server;
the receiving module 202 is configured to receive a target image returned by the server, where the target image is generated by the server according to a gaze direction of a user determined by the image acquired by the first camera and an image corresponding to the gaze direction;
and the display module 203 is used for displaying the target image to the user.
In some possible implementations, the gaze direction of the user is determined by:
the server determines the center point of the image acquired by the first camera and the position information of the user according to the image acquired by the first camera;
and the server determines the gazing direction of the user according to the central point of the image acquired by the first camera, the position information of the user and the panorama of the environment where the user is located.
In some possible implementations, the panorama of the environment in which the user is located is generated by:
the server receives a plurality of images acquired by a plurality of second cameras, and the plurality of second cameras are installed in the environment where the user is located;
and the server generates a panoramic image of the environment where the user is located according to the images acquired by the second cameras.
In some possible implementations, the target image is generated by:
the server acquires images corresponding to the gazing directions of the users according to the images acquired by the second cameras;
and the server generates a target image according to the image corresponding to the gazing direction of the user and the image range corresponding to the intelligent glasses, so that the intelligent glasses receive the target image.
In some possible implementations, the image range corresponding to the smart glasses is determined by:
acquiring an intelligent glasses identifier and an image range attribute input by a user, wherein the image range attribute comprises one or more of image range size and image range shape;
and determining the image range corresponding to the intelligent glasses according to the intelligent glasses identifier and the image range attribute.
In some possible implementations, the receiving module 202 is further configured to:
when the user performs the target eye action, receive the target image magnified by the server;
the display module 203 is further configured to:
display the magnified target image to the user.
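The server-side magnification triggered by the target eye action could, in its simplest form, be an integer nearest-neighbour upscale. A sketch under that assumption (the source does not specify the scaling method):

```python
def magnify(image, factor):
    """Upscale a row-list image by an integer factor using nearest-neighbour
    repetition, standing in for the server-side magnification step."""
    if factor < 1 or int(factor) != factor:
        raise ValueError("factor must be a positive integer")
    out = []
    for row in image:
        # Repeat each pixel horizontally, then the widened row vertically.
        widened = [px for px in row for _ in range(factor)]
        out.extend([list(widened) for _ in range(factor)])
    return out

zoomed = magnify([[1, 2]], 2)  # 1x2 image becomes 2x4
```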
The image display apparatus 200 of the embodiments of the present application corresponds to the method described in those embodiments; the operations and/or functions of each module/unit of the image display apparatus 200 implement the corresponding flows of the method shown in fig. 1 and, for brevity, are not described again here.
The functions described above may be performed, at least in part, by one or more hardware logic components. Referring to the schematic structural diagram of the server 300 for implementing image display shown in fig. 3, it should be noted that the server shown in fig. 3 is only an example and should not impose any limitation on the functions or scope of application of the embodiments of the present application.
As shown in fig. 3, the server 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with programs stored in a read-only memory (ROM) 302 or loaded from a storage device 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the server 300. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the server 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows a server 300 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
The present application also provides a computer-readable storage medium, also referred to as a machine-readable medium. A computer-readable medium in this application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code; such a propagated signal may take any of a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium, other than a computer-readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer-readable medium carries one or more programs which, when executed by the server, cause the server to: receive an image acquired by a first camera and sent by the intelligent glasses, the first camera being installed on the intelligent glasses; determine the user's gaze direction from the image acquired by the first camera; acquire an image corresponding to the user's gaze direction and generate a target image from it; and return the target image to the intelligent glasses so that the intelligent glasses display the target image to the user.
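The program flow just listed can be sketched end to end as one request/response cycle. Every function body below is a hypothetical stub standing in for the corresponding claimed step; none of the names or representations come from the source:

```python
def handle_frame(first_camera_image, environment_images, glasses_range):
    """One server cycle: determine the gaze direction from the glasses'
    camera image, fetch the matching environment-camera image, and trim
    it to the glasses' image range before returning it."""
    gaze = determine_gaze_direction(first_camera_image)
    wide_image = image_for_direction(environment_images, gaze)
    return crop(wide_image, glasses_range)

# Stub implementations so the pipeline runs; each stands in for a step
# described in the embodiments above.
def determine_gaze_direction(image):
    return "north"  # placeholder direction label

def image_for_direction(environment_images, gaze):
    return environment_images[gaze]

def crop(image, n):
    return image[:n]

target = handle_frame("frame", {"north": [1, 2, 3, 4]}, 2)
```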
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via a communications device, or from a storage device. The above-described functions defined in the methods of the embodiments of the present application are performed when the computer program is executed by a processing device.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
While several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present application. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
The foregoing description covers only preferred embodiments of the present application and explains the principles of the technology employed. Persons skilled in the art will appreciate that the scope of disclosure in this application is not limited to the specific combinations of features described above, but is intended to cover other embodiments formed by any combination of those features or their equivalents without departing from the spirit of the disclosure, for example (but not limited to) embodiments in which the features described above are interchanged with technical features of similar function disclosed in the present application.

Claims (8)

1. An image display method, applied to a server, the method comprising:
receiving an image acquired by a first camera and sent by intelligent glasses, wherein the first camera is installed on the intelligent glasses;
determining the gaze direction of a user from the image acquired by the first camera;
wherein determining the gaze direction of the user comprises:
determining position information of a center point of the image from the center point and a panorama of the environment in which the user is located;
and determining, as the user's gaze direction, the direction of the line connecting the position information of the user and the position information of the center point;
acquiring an image corresponding to the user's gaze direction, and generating a target image from the image corresponding to the user's gaze direction;
wherein the image corresponding to the user's gaze direction comprises:
an image of the environment in the user's gaze direction collected by a second camera, the image having a larger image range than the image collected by the first camera;
returning the target image to the intelligent glasses so that the intelligent glasses display the target image to the user;
wherein the second camera is installed in the environment in which the user is located.
2. The method of claim 1, wherein the panoramic view of the environment in which the user is located is generated by:
receiving a plurality of images acquired by a plurality of second cameras;
and generating a panoramic image of the environment where the user is located according to the plurality of images acquired by the plurality of second cameras.
3. The method of claim 1, wherein acquiring the image corresponding to the user's gaze direction and generating the target image from the image corresponding to the user's gaze direction comprises:
acquiring the image corresponding to the user's gaze direction from a plurality of images acquired by a plurality of second cameras;
and generating the target image from the image corresponding to the user's gaze direction and an image range corresponding to the intelligent glasses;
wherein the image range corresponding to the intelligent glasses is determined by:
acquiring an intelligent glasses identifier and an image range attribute input by the user, wherein the image range attribute comprises one or both of an image range size and an image range shape;
and determining the image range corresponding to the intelligent glasses from the intelligent glasses identifier and the image range attribute.
4. The method of claim 1, further comprising:
magnifying the target image when the user performs a target eye action.
5. An image display method, applied to intelligent glasses on which a first camera is installed, the method comprising:
acquiring an image acquired by the first camera, and sending the image acquired by the first camera to a server;
receiving a target image returned by the server, wherein the target image is generated by the server from the user's gaze direction, determined from the image acquired by the first camera, and from an image corresponding to that gaze direction;
wherein determining the gaze direction of the user comprises:
determining position information of a center point of the image from the center point and a panorama of the environment in which the user is located;
and determining, as the user's gaze direction, the direction of the line connecting the position information of the user and the position information of the center point;
wherein the image corresponding to the gaze direction comprises:
an image of the environment in the user's gaze direction collected by a second camera, the image having a larger image range than the image collected by the first camera;
displaying the target image to the user;
wherein the second camera is installed in the environment in which the user is located.
6. An image display device, wherein a first camera is mounted on the device, the device comprising:
the acquisition module, configured to acquire the image acquired by the first camera and send the image acquired by the first camera to a server;
the receiving module, configured to receive a target image returned by the server, wherein the target image is generated by the server from the user's gaze direction, determined from the image acquired by the first camera, and from an image corresponding to that gaze direction;
wherein determining the gaze direction of the user comprises:
determining position information of a center point of the image from the center point and a panorama of the environment in which the user is located;
and determining, as the user's gaze direction, the direction of the line connecting the position information of the user and the position information of the center point;
wherein the image corresponding to the gaze direction comprises:
an image of the environment in the user's gaze direction collected by a second camera, the image having a larger image range than the image collected by the first camera; the second camera is installed in the environment in which the user is located;
and the display module, configured to display the target image to the user.
7. A server comprising a processor and a memory, the memory having instructions stored therein, the processor executing the instructions to cause the server to perform the method of any of claims 1-4.
8. Intelligent glasses comprising the image display device of claim 6, such that the intelligent glasses perform the method of claim 5.
CN202210762637.9A 2022-06-30 2022-06-30 Image display method, related device, server and intelligent glasses Active CN115134528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210762637.9A CN115134528B (en) 2022-06-30 2022-06-30 Image display method, related device, server and intelligent glasses


Publications (2)

Publication Number Publication Date
CN115134528A CN115134528A (en) 2022-09-30
CN115134528B true CN115134528B (en) 2024-02-20

Family

ID=83381162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210762637.9A Active CN115134528B (en) 2022-06-30 2022-06-30 Image display method, related device, server and intelligent glasses

Country Status (1)

Country Link
CN (1) CN115134528B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103561635A (en) * 2011-05-11 2014-02-05 谷歌公司 Gaze tracking system
CN108366206A (en) * 2015-06-11 2018-08-03 广东欧珀移动通信有限公司 A kind of image pickup method and system based on rotating camera and intelligent glasses
KR101931295B1 (en) * 2018-09-06 2018-12-20 김주원 Remote image playback apparatus



Similar Documents

Publication Publication Date Title
CN111046744B (en) Method and device for detecting attention area, readable storage medium and terminal equipment
US9509916B2 (en) Image presentation method and apparatus, and terminal
CN104995865B (en) Service based on sound and/or face recognition provides
US11436863B2 (en) Method and apparatus for outputting data
EP2958035A1 (en) Simulation system, simulation device, and product description assistance method
CN108961165B (en) Method and device for loading image
CN110136054B (en) Image processing method and device
CN112261340B (en) Visual field sharing method and device, electronic equipment and readable storage medium
CN112578971A (en) Page content display method and device, computer equipment and storage medium
CN107622241A (en) Display methods and device for mobile device
CN107592520B (en) Imaging device and imaging method of AR equipment
CN115134528B (en) Image display method, related device, server and intelligent glasses
CN110662015A (en) Method and apparatus for displaying image
CN108595011A (en) Information displaying method, device, storage medium and electronic equipment
CN115576470A (en) Image processing method and apparatus, augmented reality system, and medium
CN114742561A (en) Face recognition method, device, equipment and storage medium
CN112540673A (en) Virtual environment interaction method and equipment
US10915753B2 (en) Operation assistance apparatus, operation assistance method, and computer readable recording medium
CN112884538A (en) Item recommendation method and device
CN114740974A (en) Data processing method and electronic equipment
CN116738088A (en) Display method, display device, electronic equipment and storage medium
CN114668365A (en) Vision detection method
CN116193246A (en) Prompt method and device for shooting video, electronic equipment and storage medium
CN117749978A (en) Method, non-transitory computer readable medium, and terminal device
CN117148965A (en) Interactive control method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant