CN115185437A - Projection interaction method, device, medium and program product - Google Patents

Info

Publication number
CN115185437A
CN115185437A (application CN202210726966.8A)
Authority
CN
China
Prior art keywords
information
target user
image information
computer
annotation
Prior art date
Legal status
Pending
Application number
CN202210726966.8A
Other languages
Chinese (zh)
Inventor
廖春元
方中慧
杨浩
林祥杰
Current Assignee
Hiscene Information Technology Co Ltd
Original Assignee
Hiscene Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hiscene Information Technology Co Ltd filed Critical Hiscene Information Technology Co Ltd
Publication of CN115185437A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0489Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An object of the present application is to provide a projection interaction method, device, medium, and program product. The method specifically includes: acquiring corresponding top image information through a top camera device and transmitting it to a corresponding target user device, for the target user device to present; acquiring image annotation information, about the top image information, of the target user corresponding to the target user device, where the image annotation information includes corresponding annotation information and the annotation position information of that annotation information, the annotation information being determined by a first user operation of the target user; and determining corresponding projection position information based on the annotation position information, and projecting and presenting the annotation information based on the projection position information. The application offers the current user an engaging interaction while providing a more realistic and natural augmented reality experience.

Description

Projection interaction method, device, medium and program product
This application claims priority to CN202210241557.9, filed on 2022-03-11.
Technical Field
The present application relates to the field of communications, and more particularly, to a technique for projection interaction.
Background
Augmented Reality (AR) is a technology that calculates the position and angle of camera imagery in real time and superimposes corresponding digital information, such as virtual three-dimensional model animations, video, text, and pictures; its aim is to overlay the virtual world onto the real world on screen so the two can interact. Video calling generally refers to a communication mode, based on the internet and mobile internet terminals, in which human voice and images are transmitted between smart devices in real time. Existing computer devices, however, lack interactive capabilities beyond video communication.
Disclosure of Invention
An object of the present application is to provide a projection interaction method, apparatus, medium, and program product.
According to one aspect of the application, a projection interaction method is provided, wherein the method is applied to a computer device comprising a top camera device and a projection device, and the method comprises the following steps:
acquiring corresponding top image information through the top camera device, and transmitting the top image information to corresponding target user equipment for the target user equipment to present the top image information;
acquiring image annotation information, about the top image information, of the target user corresponding to the target user equipment, wherein the image annotation information comprises corresponding annotation information and the annotation position information of that annotation information, and the annotation information is determined by a first user operation of the target user;
and determining corresponding projection position information based on the annotation position information, and projecting and presenting the annotation information based on the projection position information.
According to another aspect of the present application, there is provided a projection interaction method, applied to a target user equipment, the method including:
receiving and presenting top image information that is collected by a corresponding top camera device and transmitted by corresponding computer equipment;
acquiring a first user operation of a target user corresponding to target user equipment, and generating annotation information about the top image information based on the first user operation;
and returning the annotation information to the computer equipment, so that the computer equipment presents the annotation information through a corresponding projection device.
According to an aspect of the application, there is provided a computer apparatus for projecting an interaction, the computer apparatus comprising a top camera and a projection device, the apparatus comprising:
a module 1-1, configured to acquire corresponding top image information through the top camera device and transmit it to corresponding target user equipment, for the target user equipment to present the top image information;
a module 1-2, configured to obtain image annotation information, about the top image information, of the target user corresponding to the target user equipment, where the image annotation information includes corresponding annotation information and the annotation position information of that annotation information, and the annotation information is determined by a first user operation of the target user;
and a module 1-3, configured to determine corresponding projection position information based on the annotation position information and to project and present the annotation information based on the projection position information.
According to another aspect of the application, a target user device for projecting an interaction is provided, wherein the device comprises:
a module 2-1, configured to receive and present top image information collected by a corresponding top camera device and transmitted by corresponding computer equipment;
a module 2-2, configured to obtain a first user operation of a target user corresponding to the target user device, and to generate annotation information about the top image information based on the first user operation;
and a module 2-3, configured to return the annotation information to the computer equipment, so that the computer equipment presents the annotation information through a corresponding projection device.
According to an aspect of the present application, there is provided a computer apparatus, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the steps of the method as described in any one of the above.
According to an aspect of the application, there is provided a computer-readable storage medium having stored thereon a computer program/instructions which, when executed, cause a system to perform the steps of the method as described in any one of the above.
According to an aspect of the application, there is provided a computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the steps of the method as described in any of the above.
Compared with the prior art, the present application projects and presents the target user's image annotation information at the computer device end of a two-party interaction, which not only gives the current user an engaging interaction but also provides a more realistic and natural augmented reality experience. In particular, it can strengthen parent-child companionship and the sense of participation for parents who are not at their child's side.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a system topology diagram of a projection interaction according to one embodiment of the present application;
FIG. 2 illustrates a method flow diagram of a method of projection interaction, according to one embodiment of the present application;
FIG. 3 illustrates a method flow diagram of a method of projection interaction, according to one embodiment of the present application;
FIG. 4 illustrates functional modules of a computer device according to one embodiment of the present application;
FIG. 5 illustrates functional modules of a target user equipment according to one embodiment of the present application;
FIG. 6 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
Memory may include volatile memory, Random Access Memory (RAM), and/or non-volatile memory forms in a computer-readable medium, such as Read-Only Memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PCM), Programmable Random Access Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The devices referred to in this application include, but are not limited to, user equipment, network equipment, or devices formed by integrating user equipment and network equipment through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user, such as a smartphone, a tablet computer, or a smart desk lamp, and the mobile electronic product may run any operating system, such as Android or iOS. The network equipment includes electronic devices capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), embedded devices, and the like. The network equipment includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on Cloud Computing, a kind of distributed computing in which a collection of loosely coupled computers forms one virtual supercomputer. The network includes, but is not limited to, the internet, wide area networks, metropolitan area networks, local area networks, VPNs, and wireless ad hoc networks. Preferably, the device may also be a program running on the user equipment, the network equipment, or a device formed by integrating user equipment with network equipment, a touch terminal, or network equipment with a touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 illustrates an exemplary scenario of the present application, in which a computer device 100 establishes a communication connection with a target user device 200, and the computer device 100 transmits corresponding top image information to the target user device 200; the target user device 200 receives the top image information, determines corresponding annotation information or image annotation information based on it, and then returns that annotation information or image annotation information to the computer device 100. The computer device 100 includes, but is not limited to, any electronic product capable of human-computer interaction with a user, such as a smartphone, a tablet computer, a smart desk lamp, or a smart projection device. The computer device comprises a top camera device (such as a camera or a depth camera) for collecting, from above (e.g., directly above or obliquely above), image information related to an operation object of the computer device's current user (such as a book the user is reading or a workpiece being operated on). The target user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user, such as a smartphone, a tablet computer, or a personal computer, and comprises a display device for presenting the top image information, such as a liquid crystal display or a projector; the target user device further comprises an input device for collecting the user's annotation information or image annotation information about the top image information. The annotation information includes, but is not limited to, mark information such as stickers, text, graphics, video, graffiti, 2D labels, or 3D labels about an interactive object in the top image information, and the corresponding image annotation information comprises the annotation information together with corresponding annotation position information, such as the image coordinates, in an image coordinate system, of the annotation information or of its interactive object; this form of annotation position information is only an example and is not limiting. In the present application, data transmission between the target user device and the computer device may take place over a direct communication connection between the two devices, or may be relayed through a corresponding server. A rough data-model sketch follows.
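As a non-authoritative illustration of the data exchanged in this scenario, the following Python sketch models image annotation information as annotation content plus coordinates in the top image's pixel coordinate system; all names here (ImageAnnotation, AnnotationMessage, frame_id) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ImageAnnotation:
    """One piece of annotation information plus its position in the top
    image's pixel coordinate system (origin at a reference pixel such as
    the top-left corner, X horizontal, Y vertical)."""
    kind: str        # e.g. "sticker", "text", "graffiti", "2d_label", "3d_label"
    payload: bytes   # serialized sticker/text/graphic/video content
    # Either a single center point or a coordinate set describing the
    # region occupied by the annotation or its interactive object.
    position: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class AnnotationMessage:
    """Message returned from the target user device to the computer device,
    either over a direct connection or relayed through a server."""
    frame_id: int    # which top image frame the annotation refers to
    annotations: List[ImageAnnotation] = field(default_factory=list)
```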
FIG. 2 illustrates a projection interaction method according to an aspect of the present application, applied to a computer device 100 and applicable to the system topology shown in FIG. 1; the method includes step S101, step S102, and step S103. In step S101, corresponding top image information is acquired by the top camera device and transmitted to a corresponding target user device, for the target user device to present; in step S102, image annotation information, about the top image information, of the target user corresponding to the target user device is obtained, where the image annotation information includes corresponding annotation information and the annotation position information of that annotation information, the annotation information being determined by a first user operation of the target user; in step S103, corresponding projection position information is determined based on the annotation position information, and the annotation information is projected and presented based on the projection position information.
Specifically, in step S101, top image information is acquired by the top camera device and transmitted to the corresponding target user device, for the target user device to present. For example, a current user (e.g., user A) holds a computer device that can communicate with a target user device held by a target user (e.g., user B): a communication connection between the two may be established in a wired or wireless manner, or data may be relayed between them via a network device. The computer device includes a top camera device for acquiring image information related to the current user's operation object; the top camera device collects this image information from above the operation object (for example, from directly above or obliquely above), and in some cases it is disposed directly above the operation object so that its optical axis passes through the centroid or center of the operation object. To achieve this, the computer device is usually provided with a forward extension rod, with the top camera device disposed on the underside of the rod to acquire image information below it. The operation area corresponding to the operation object lies beneath the extension rod, between the computer device and its user; it may be an extension area of the computer device itself (through which top image information can be acquired relatively accurately), or a blank area between the computer device and the user may serve as the operation area (for example, a blank desktop). In some cases, the base of the computer device is kept horizontal to keep the device stable; correspondingly, the surface of the operation area stays horizontal while the optical axis of the top camera device points vertically downward, so that the various regions of the operation object in the collected top image information lie at similar distances from the top camera device.
The computer device collects top image information about an operation object through the top camera device and sends it to the target user device, either directly or via a network device. In some embodiments, the computer device may first run recognition on the top image information (e.g., identifying and tracking the operation object through preset template features) to confirm that the operation object is present. If no operation object appears in the current top image information, the computer device can adjust the shooting angle of the top camera device and capture other areas until the operation object appears, for example by adjusting the extension angle and height of the extension rod, or by directly adjusting the shooting pose of the top camera device. If no operation object is found after capturing at all angles, or after continuously collecting a certain amount of top image information, the computer device presents prompt information indicating that no operation object exists in the current user's operation area. If the operation object does exist in the current top image information, the top image information is transmitted to the target user device. After receiving it, the target user device presents the top image information, for example on a display screen or by projection through a projector. A detection sketch follows.
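The patent only says the operation object can be identified and tracked through "preset template features", so the concrete detector below is an assumption: a minimal OpenCV sketch that checks for the object with ORB feature matching. The function names and thresholds are illustrative.

```python
import cv2

def _gray(img):
    # ORB works on single-channel images; convert BGR frames if needed.
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

def operation_object_present(top_image, template, min_matches=15):
    """Return True if the operation object (described by a preset template
    image) plausibly appears in a top image frame."""
    orb = cv2.ORB_create()
    kp_t, des_t = orb.detectAndCompute(_gray(template), None)
    kp_i, des_i = orb.detectAndCompute(_gray(top_image), None)
    if des_t is None or des_i is None:
        return False  # one of the images has no usable features
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_t, des_i)
    # Keep only reasonably close descriptor matches.
    good = [m for m in matches if m.distance < 60]
    return len(good) >= min_matches
```

If this check fails repeatedly, the device would adjust the shooting pose and retry, then fall back to the prompt described above.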
The target user device receives and presents the top image information so that the target user can interact with it. While presenting the top image information, the target user device can collect, through its input device, the target user's annotation information about an interactive object in the top image information. The interaction position of the interactive object may be predetermined, may be determined by the current user and transmitted to the target user device, or may be determined from the target user's operation position (for example, a touch position, a cursor position, a gesture recognition result, or a speech recognition result). If the interaction position in the top image information is predetermined or determined by the current user, the target user device transmits the annotation information directly to the computer device; if it is determined by the target user's operation position, the target user device takes that interaction position as the annotation position information, generates corresponding image annotation information by combining it with the annotation information, and returns the image annotation information to the computer device.
In step S102, image annotation information, about the top image information, of the target user corresponding to the target user device is obtained, where the image annotation information includes corresponding annotation information and the annotation position information of that annotation information, the annotation information being determined by a first user operation of the target user. For example, an image coordinate system is established on the top image information: an image/pixel coordinate system with some pixel (for example, the top-left pixel of the image) as the origin, the horizontal axis as the X axis, and the vertical axis as the Y axis. The corresponding annotation position information then consists of the coordinates, in this image coordinate system, of the annotation information or of its interactive object; these coordinates may be a coordinate set indicating the center position or the area occupied by the annotation information or its interactive object. The annotation position information may be determined from user operations of the current user and the target user, or may be preset. The corresponding annotation information is determined by the target user device from a first user operation of the target user on the top image, for example from mark information the target user adds by mouse input, keyboard input, touch-screen input, gesture input, or voice input. In some cases, the annotation position information is determined directly from the mouse click position, keyboard input position, touch position, gesture recognition result, or speech recognition result, expressed in the image coordinates of the top image information; alternatively, that operation position is first used to identify the corresponding interactive object in the top image information, and the image annotation position is then derived from the interactive object.
The annotation information includes, but is not limited to, mark information such as stickers, text, graphics, video, graffiti, 2D labels, or 3D labels about the interactive objects in the top image information; in some embodiments, the presentation form of the corresponding annotation position information also differs with the type of annotation information.
In step S103, corresponding projection position information is determined based on the annotation position information, and the annotation information is projected and presented based on the projection position information. For example, the projection device is disposed near the top camera device; for instance, a corresponding projector and the top camera device are both mounted on the top extension rod. The mapping between the projection device and the top camera device can be obtained by calculation, e.g.: s1, project a calibration image containing a specific pattern (such as a checkerboard) onto the operation area using the projection device; s2, collect, with the top camera device, a video image containing the operation area; s3, identify the coordinates of each pattern in the image collected in s2; s4, establish correspondences between the patterns in the original calibration image projected in s1 and the pattern coordinates in the video image of the operation area collected in s2; s5, estimate the camera's intrinsic and extrinsic parameters and distortion parameters from the two sets of coordinates in s4; s6, use the parameters obtained in s5 to realize the mapping between the two images. Those skilled in the art will appreciate that this method of computing the mapping between the projection device and the top camera device is only an example; other existing or future methods, if applicable to the present application, are also within its scope and are incorporated herein by reference. Based on this mapping, the annotation position information can be converted into projection position information, such as the projection coordinates of the annotation information in the projection-image coordinate system; this projection position information is only an example and is not limiting. The computer device then projects the annotation information onto the corresponding operation area according to the projection coordinates, so that the annotation information is presented on the corresponding region of the operation object. A miniature of this calibration follows.
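A minimal sketch of steps s1-s6 with OpenCV, under the simplifying assumption that the operation surface is planar, so a single homography (rather than the full intrinsic/extrinsic/distortion estimation the patent describes) suffices to map camera pixels to projector pixels. All function names here are illustrative, not from the patent.

```python
import cv2
import numpy as np

def estimate_projector_camera_homography(proj_pattern, cam_frame, board=(9, 6)):
    """proj_pattern: the checkerboard image sent to the projector (s1);
    cam_frame: the top camera's view of the projected pattern (s2).
    Returns H mapping camera-image pixels to projector-image pixels."""
    # s3/s4: locate the same inner checkerboard corners in both images.
    ok_p, corners_p = cv2.findChessboardCorners(proj_pattern, board)
    ok_c, corners_c = cv2.findChessboardCorners(cam_frame, board)
    if not (ok_p and ok_c):
        raise RuntimeError("checkerboard not found in one of the images")
    # s5/s6 (planar shortcut): estimate the camera->projector mapping.
    H, _ = cv2.findHomography(corners_c, corners_p, cv2.RANSAC)
    return H

def annotation_to_projection(H, annotation_xy):
    """Convert an annotation position (camera-image pixels) into a
    projection position (projector-image pixels) using the mapping above."""
    pt = np.array([[annotation_xy]], dtype=np.float32)  # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]
```

With H computed once at setup, each incoming annotation position can be converted on the fly and drawn into the projector's frame buffer at the returned coordinates.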
In some embodiments, the computer device further includes a display device, and the method further includes a step S104 (not shown): in step S104, target image information transmitted by the target user device is received and presented through the display device. For example, the display device (such as a liquid crystal display) presents image information stored or received by the computer device. In some cases, it is positioned directly in front of the computer device, facing the current user, so that the current user can easily view the corresponding image information. During communication between the computer device and the target user device, to ease communication between the two users and improve the immediacy and efficiency of the interaction, the target user device is provided with a camera device of its own that collects target image information at the target user device end, including image information about the target user; the target user device can transmit this target image information to the computer device, which receives it and displays it through the display device. In some cases, the target image information and the corresponding top image information belong to video streams acquired in real time, so both the computer device and the target user device can present, through their display devices, the real-time video streams corresponding to the top image information and the target image information; these video streams contain not only images collected by the camera devices but also voice information collected by voice input devices. While displaying the real-time video streams, the computer device and the target user device also play the corresponding voice information through voice output devices; that is, the target user device and the computer device hold audio and video communication.
In some embodiments, the computer device further comprises a front camera device for acquiring front image information of the current user of the computer device; the method further comprises a step S105 (not shown): in step S105, the front image information is transmitted to the target user device, for the target user device to present. For example, the front camera device collects image information related to the current user holding the computer device and is disposed on the side of the computer device facing the current user, e.g., above the display device. In some cases, the front camera device mainly collects image information of the current user's head, enabling video communication between the current user and the target user. When enabled, the front camera device collects front image information about the current user and transmits it to the target user device for display on its display device. The front camera device may be turned on by a video establishment request from the target user device or the computer device, switched to from the enabled state of the top camera device, or turned on by triggering a separate enable control of the front camera device.
In some embodiments, the method further includes step S106 (not shown): in step S106, a camera switching request about the current video interaction between the computer device and the target user device is acquired, where the image information of the current video interaction includes the front image information; in step S101, in response to the camera switching request, the front camera device is turned off and the top camera device is turned on, the top camera device collects corresponding top image information, and the top image information is transmitted to the corresponding target user device for presentation. For example, only one of the front camera device and the top camera device is enabled at any time, which reduces the bandwidth pressure of video interaction and keeps the video interaction process efficient and orderly. In some embodiments, the computer device is provided with a camera switching control, which may be a physical button on the computer device or a virtual control presented on the current screen, used to switch from the enabled state of the front camera device to that of the top camera device. In some cases, the same control also adjusts the computer device from the enabled state of the top camera device back to that of the front camera device; in other words, it toggles between the two. In other cases, the camera switching control only switches from the front camera device to the top camera device, and the computer device is further provided with a camera restore control that adjusts from the top camera device back to the front camera device. In other embodiments, the computer device determines the camera switching request by recognizing interactive input such as the current user's gestures, voice, or head movement. Similarly, the target user device may also be provided with a camera switching control, or may determine the camera switching request by recognizing the target user's gestures, voice, head movement, and so on, which is not repeated here. While the computer device is collecting front image information during a video interaction, upon acquiring a touch operation of a user (the current user or the target user) on the camera switching control, it closes the front camera device, starts the top camera device, collects corresponding top image information through the top camera device, and transmits it to the corresponding target user device for presentation. A sketch of this single-active-camera policy follows.
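A minimal sketch of the single-active-camera policy described above: exactly one camera streams at a time, and a switch request toggles which one. The class, method names, and device interface (open/close) are all assumptions for illustration.

```python
from enum import Enum

class Camera(Enum):
    FRONT = "front"  # video communication mode
    TOP = "top"      # video guidance mode

class CameraController:
    """Keeps at most one camera enabled, so only one video stream
    consumes bandwidth at any moment."""
    def __init__(self, front_dev, top_dev):
        self.devices = {Camera.FRONT: front_dev, Camera.TOP: top_dev}
        self.active = Camera.FRONT  # interaction starts with the front camera

    def handle_switch_request(self):
        # Close the currently enabled camera, then enable the other one.
        self.devices[self.active].close()
        self.active = Camera.TOP if self.active is Camera.FRONT else Camera.FRONT
        self.devices[self.active].open()
```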
Further, the target user can perform a first user operation on the top image information to determine image annotation information about it; the computer device determines projection position information from the annotation position information and projects and presents the annotation information accordingly, so that the target user can intuitively guide the current user's work on the operation object.
In some cases, the enabled states of the different cameras correspond to different interaction modes. For example, when only the front camera device is on, the computer device is in a video communication mode for video communication between the current user and the target user; when only the top camera device is on, the computer device is in a video guidance mode in which the target user provides communication and guidance about the current user's operation object; when both the front camera device and the top camera device are on, the computer device supports the target user's guidance about the operation object while also letting the target user communicate with the current user by video.
In some embodiments, in step S105, a camera switching request about the current video interaction between the computer device and the target user device is received from the target user device, where the camera switching request is determined by a second user operation of the target user. For example, the target user device may generate the camera switching request from a second user operation of the target user (e.g., a trigger operation on a camera switching control, or an interactive input instruction about camera switching) and transmit it to the corresponding computer device; or the target user device sends the second user operation to a corresponding server, which generates the camera switching request and sends it to the computer device. Here, "first user operation" and "second user operation" merely distinguish the actions they correspond to, and imply nothing about order or magnitude. After the video interaction is established, while front image information from the front camera device is being acquired and presented, in some embodiments a corresponding camera switching control is shown on the current screen, and the target user device determines the camera switching request by acquiring a second user operation, such as the target user's touch operation on that control; in other embodiments, the target user device determines the request by recognizing an interactive input instruction, such as the target user's gesture, voice, or head movement. The camera switching request usually occurs after the video interaction is established; at establishment time, the front camera device may be enabled directly by the video establishment request, or enabled through mutual operation between the target user device and the computer device after they establish communication, or switched to from the enabled state of the top camera device after the target user device has established communication with the computer device and the top camera device has been enabled.
In some embodiments, in step S105, a video establishment request about the current video interaction is obtained; in response, the current video interaction between the computer device and the target user device is established based on the request, corresponding front image information is acquired through the front camera device, and the front image information is transmitted to the target user device for presentation. For example, the current video interaction may be initiated by a video establishment request determined by a user operation of the current user or of the target user; at the start of the interaction, video streams are transmitted between the target user device and the computer device, e.g., the computer device transmits corresponding front image information to the target user device while the target user device transmits corresponding target image information to the computer device, or only the computer device transmits front image information to the target user device. The video establishment request may be determined by an initiating operation of the current user on the computer device (for example, a trigger operation on a video establishment control, or an interactive input instruction about video establishment); in that case, it contains user identification information of the target user, which includes, but is not limited to, unique identifying information such as a name, an image, an identity card, a mobile phone number, an application serial number, or a device media access control address. The computer device may send the video establishment request to the network device, which forwards it to the target user device and establishes the video interaction between the two, or the computer device may send it directly to the target user device and establish the interaction itself. Likewise, the video establishment request may be determined by an initiating operation of the target user on the target user device side, in which case it contains user identification information of the current user; the target user device may send it to the network device for forwarding to the computer device and establishment of the video interaction between the two, or send it directly to the computer device and establish the interaction itself.
In some embodiments, the method further comprises a step S107 (not shown): in step S107, a camera restore request about the current video interaction between the computer device and the target user device is obtained; in response, the top camera device is closed, the front camera device is started, corresponding front image information is collected through the front camera device, and the front image information is transmitted to the corresponding target user device. For example, the camera restore request adjusts the computer device from the enabled state of the top camera device to that of the front camera device; after restoration, the top camera device is off and only front image information is collected and transmitted, e.g., the computer device transmits front image information to the target user device while the target user device transmits target image information to the computer device, or only the computer device transmits front image information. In some embodiments, the camera restore request is initiated by a touch operation of the current user or the target user on the camera restore control; in other embodiments, it is initiated by an interactive input instruction (such as a gesture, voice, or head movement) of the current user or the target user about camera restoration. Based on the request, the computer device turns off the top camera device that the current video interaction has enabled and enables the corresponding front camera device, restoring the session from video guidance about the operation object to video communication between the two parties.
In some embodiments, the method further includes step S108 (not shown): in step S108, a camera turn-on request for the video interaction between the computer device and the target user device is acquired; in step S101, in response to the request, corresponding top image information is collected by the top camera device and transmitted to the corresponding target user device; in step S105, in response to the request, corresponding front image information is collected by the front camera device and transmitted to the target user device. For example, the camera turn-on request starts the front camera device and the top camera device simultaneously. It may be carried in the video establishment request, starting both devices while the video interaction is established and presenting separate close controls for the front camera device and the top camera device, so that the current user and/or the target user can close either one; of course, if both are closed at once, the video interaction between the computer device and the target user device ends. In some cases, the camera turn-on request may instead invoke the front camera device or the top camera device after the video interaction is established, i.e., during the video interaction (in which one of the two is already enabled), so that both camera devices become enabled at the same time. In some embodiments, the camera turn-on request is generated by a touch operation of the current user or the target user on the turn-on control; in other embodiments, it is generated by an interactive input instruction (such as a gesture, voice, or head movement) of the current user or the target user about turning the cameras on, and both cameras are enabled when the computer device or the target user device responds to the request.
In some embodiments, the computer device comprises a lighting device, and the method further includes step S109 (not shown): in step S109, if an activation request for the lighting device is obtained, the lighting device is turned on. For example, the computer device may include a lighting device for adjusting the brightness of the operation area. The activation request turns on the corresponding lighting device so that it casts light of a certain intensity onto the operation area, changing the ambient brightness; the request may be generated by the computer device from an operation of the current user, or generated by the target user device from an operation of the target user (such as a touch on a lighting control) and transmitted to the computer device. In some cases, the activation request is carried in a corresponding video establishment request, turning on the lighting device while the video interaction is being established; in other cases, it is determined by a user operation of the target or current user during, or outside of, a video interaction, thereby turning on the lighting device to adjust the ambient brightness.
In some embodiments, the computer device further comprises an ambient light detection device, and the method further comprises step S110 (not shown): in step S110, the illumination intensity information of the current environment is acquired through the ambient light detection device, and it is checked whether this information satisfies a preset illumination threshold; if not, the lighting device is adjusted until it does. For example, the lighting device of the computer device is brightness-adjustable: its brightness may be adjusted by a touch selection of the current user or the target user on a brightness adjustment control, or by an interactive input instruction (such as a gesture, voice, or head movement) about brightness adjustment. In some cases, the ambient light detection device cooperates with the lighting device to adjust the lighting automatically, keeping the ambient brightness controllable and suitable. The computer device measures the illumination intensity of the current environment with the ambient light detection device and compares it with a preset illumination threshold, which may be a specific illumination intensity value or an interval of illumination intensity values, without limitation. If the measured intensity equals the threshold, or their difference is smaller than a preset difference threshold, the intensity is deemed to satisfy the threshold; or, if it falls within the threshold interval, it satisfies the threshold. If not, corresponding illumination adjustment information is computed from the measured intensity and the threshold, and the lighting device is adjusted accordingly; the illumination adjustment information includes the increase or decrease to apply to the current illumination intensity. A feedback-loop sketch follows.
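A minimal feedback sketch of step S110, assuming a threshold interval and simple device interfaces: sensor.read_lux() and lamp.adjust() are hypothetical, and the target interval and step size are invented for illustration.

```python
def regulate_lighting(sensor, lamp, target=(280.0, 320.0), step=5.0, max_steps=100):
    """Read illumination intensity (lux) from the ambient light sensor and
    nudge the lamp's output until the reading falls inside the preset
    illumination threshold interval."""
    low, high = target
    for _ in range(max_steps):  # bounded so a saturated lamp cannot loop forever
        lux = sensor.read_lux()
        if low <= lux <= high:
            return lux
        # Illumination adjustment value: raise output when too dark,
        # lower it when too bright.
        lamp.adjust(step if lux < low else -step)
    return sensor.read_lux()
```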
In some embodiments, the method further includes a step S111 (not shown): in step S111, the image brightness information of the top image information is determined from the top image information, and it is checked whether this information satisfies a preset brightness threshold; if not, the lighting device is adjusted until it does. For example, the computer device can determine image brightness information from the top image information, e.g., compute the average brightness of the current image from the pixel brightness of some pixels (such as a sample of pixels) or of all pixels. The image brightness is compared with a preset brightness threshold, which may be a specific image brightness value or an interval of brightness values, without limitation. If the image brightness equals the threshold, or their difference is smaller than a preset difference threshold, it satisfies the threshold; or, if it falls within the threshold interval, it satisfies the threshold. If not, corresponding illumination adjustment information is computed from the image brightness and the threshold, and the lighting device is adjusted accordingly; the illumination adjustment information includes the increase or decrease to apply to the current lighting intensity. In some cases, the computer device may instead adjust the lighting device based on the image brightness of a specific region of the top image information, such as an interaction region determined from the interactive object (e.g., its boundary region or circumscribed rectangle) or determined from the top image information by a user operation of the target or current user. A sketch of this computation follows.
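A sketch of the brightness computation in step S111, covering both the "some or all pixels" sampling and the optional region restriction. It assumes a BGR color frame as NumPy array; the BT.601 luma weights are an assumption, since the patent does not fix how pixel brightness is defined.

```python
import numpy as np

def image_brightness(top_image, region=None, sample_step=4):
    """Average brightness of a top image frame, optionally restricted to an
    interaction region (x, y, w, h) and sampled every `sample_step` pixels."""
    img = top_image.astype(np.float32)
    if region is not None:
        x, y, w, h = region
        img = img[y:y + h, x:x + w]
    img = img[::sample_step, ::sample_step]  # "some pixels" sampling
    # Luma-style weighting, in BGR channel order.
    gray = img @ np.array([0.114, 0.587, 0.299], dtype=np.float32)
    return float(gray.mean())
```

The returned value would then be compared against the preset brightness threshold or interval, and the same adjustment loop as in step S110 applied to the lighting device.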
In some embodiments, the method further includes a step S112 (not shown): in step S112, interaction position information of the current user with respect to the top image information is acquired; corresponding virtual presentation information is determined based on the interaction position information, and the virtual presentation information is projected and presented through the projection device. For example, during video interaction, the interaction position information about the interaction object in the top image information may be obtained from a user operation of the current user: the current user is presented with the corresponding top image information on the display device, one or more pixel positions or pixel regions are determined from the top image information based on a frame-selection, click, touch, or other operation of the user, and these pixel positions or regions are taken as the corresponding interaction position information, which includes coordinate position information in the image coordinate system of the top image information, and the like. As another example, the computer device determines, through image recognition, the position pointed at during a user operation of the current user on the operation object (for example, pointing at a certain position with a finger or a pen tip), and takes that pointed position as the corresponding interaction position information.
The computer device may present the corresponding virtual presentation information directly based on the interaction position information. For example, the computer device matches virtual information corresponding to the interaction position information from a database, or performs target recognition on the interaction object corresponding to the interaction position information so as to match the corresponding virtual information in the database; the matched virtual information is determined as the virtual presentation information, the projection position information of the virtual presentation information is determined from the interaction position information, and the virtual presentation information is projected by the projection device to the spatial location where the interaction object is located. As yet another example, in some embodiments the computer device includes an infrared measurement device, and acquiring the interaction position information of the current user about the top image information comprises determining the interaction position information of the top image information through the infrared measurement device. In some embodiments, the infrared measurement device comprises an infrared camera and an infrared emitter; for example, the infrared camera is arranged together with the top camera on the top extension rod, and the infrared emitter is arranged on the base of the computer device. The infrared emitter forms an invisible light film above the surface of the operation object at a certain distance threshold; when a finger or any opaque object touches the surface, light is reflected to the infrared camera, and the position where the finger or object contacts the operation object is obtained by accurately calculating the photoelectric position. In other embodiments, the infrared measurement device includes an infrared camera and an infrared pen, and the infrared camera determines the position where the infrared pen touches the operation object when the user touches its surface with the pen. The corresponding interaction position information is then determined based on the position where the finger, the opaque object, or the infrared pen contacts the operation object.
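As a sketch only: one plausible realization of the infrared measurement, assuming the touch reflection appears as a bright blob in the infrared frame and that a homography `H_ir_to_top` from infrared pixels to top-image pixels has been calibrated offline (both are assumptions, not statements of the disclosure):

```python
import cv2
import numpy as np

def touch_position(ir_frame: np.ndarray, H_ir_to_top: np.ndarray):
    """Return (x, y) in top-image coordinates, or None if nothing touches the film.

    ir_frame is assumed to be a single-channel uint8 infrared image.
    """
    _, mask = cv2.threshold(ir_frame, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)          # brightest reflection
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # blob centroid (IR pixels)
    pt = cv2.perspectiveTransform(
        np.array([[[cx, cy]]], dtype=np.float32), H_ir_to_top)
    return float(pt[0, 0, 0]), float(pt[0, 0, 1])
```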
In some embodiments, in step S102, the interaction position information is sent to the target user equipment, and annotation information returned by the target user equipment based on the interaction position information is received, so as to determine the image annotation information of the top image information, where the image annotation information includes the annotation information and the interaction position information, and the annotation information is determined by a first user operation of the target user. For example, the interaction position information is used to prompt the target user where the interaction object is located in the top image information. The target user equipment receives the interaction position information along with the top image information transmitted by the computer device, collects the user operation of the target user on the interaction object corresponding to that position, and determines the corresponding annotation information based on this first user operation. The target user equipment returns the annotation information directly to the computer device, which combines it with the previously determined interaction position information to realize the projection presentation of the annotation information.
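Purely as an illustration of this round trip, the exchanged payloads might be encoded as follows; the field names and the JSON encoding are assumptions for exposition, not a wire format given in the disclosure:

```python
import json

# Computer device -> target user equipment: image reference plus interaction position.
outbound = json.dumps({
    "type": "top_image",
    "image_id": 42,
    "interaction_position": {"x": 312, "y": 187},  # top-image coordinate system
})

# Target user equipment -> computer device: annotation only; the computer
# device already holds the interaction position and combines the two.
inbound = json.loads(
    '{"type": "annotation", "image_id": 42,'
    ' "annotation": {"kind": "text", "content": "loosen this screw"}}'
)
image_annotation = {
    "annotation": inbound["annotation"],
    "interaction_position": {"x": 312, "y": 187},
}
```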
In some embodiments, the computer device further comprises a distance measuring device, and the method further includes a step S113 (not shown): in step S113, distance information between the current user and the computer device is determined through the distance measuring device, and if the distance information does not satisfy a preset distance threshold, the computer device sends a notification. For example, the distance measuring device (such as a laser rangefinder) is disposed on the side of the computer device that is parallel to the corresponding operation area and faces the current user, and measures real-time distance information between the computer device and the current user. A corresponding distance threshold interval is set in the computer device; when the distance information lies within the interval, it is determined that the distance information satisfies the preset distance threshold and that the pose of the current user meets the requirement. If the distance information lies outside the interval, it is determined that the distance information does not satisfy the preset distance threshold, and the computer device sends a corresponding prompt notification, e.g., reminding the current user through sound, image, vibration, or text to adjust his or her posture, so as to ensure that the distance information satisfies the preset distance threshold.
Fig. 3 illustrates a projection interaction method according to an aspect of the present application, wherein the method is applied to the system illustrated in fig. 1, specifically to a target user equipment 200, and mainly includes step S201, step S202, and step S203. In step S201, top image information transmitted by the corresponding computer device and acquired by the corresponding top camera is received and presented; in step S202, a first user operation of a target user corresponding to the target user equipment is obtained, and annotation information about the top image information is generated based on the first user operation; in step S203, the annotation information is returned to the computer device, so that the computer device presents the annotation information through a corresponding projection device.
For example, a current user (e.g., user A) holds a computer device, and the computer device can communicate with a target user device held by a target user (e.g., user B), such as by establishing a wired or wireless communication connection between the two, or by performing data communication via a network device. The computer device includes a top camera for acquiring image information related to the operation object of the current user; for example, the top camera acquires image information about the operation object from above it (e.g., directly above or obliquely above). In some cases, the top camera is disposed directly above the operation object, and its optical axis can pass through the centroid or center of the operation object. Of course, to enable the top camera to meet this requirement, the computer device is usually provided with a forward extension rod; the top camera is mounted on the underside of the extension rod and collects image information below it. The operation area corresponding to the operation object is located below the extension rod, between the computer device and the user to whom it belongs; this operation area may be an extension area of the computer device itself (for example, so that the top image information can be accurately acquired through the device's own extension area), or a blank area between the computer device and its user may serve as the operation area (for example, a blank desktop). Of course, in some cases the base of the computer device is kept horizontal to keep the device stable; correspondingly, with the optical axis of the top camera facing vertically downward and the surface of the operation area kept horizontal, the distances from the various regions of the operation object to the top camera in the collected top image information are approximately equal.
The computer device collects top image information about the operation object through the top camera and sends it to the target user equipment, directly or via a network device. In some embodiments, the computer device may first perform recognition on the top image information (e.g., identifying and tracking the operation object through preset template features) to ensure that the corresponding operation object is present in the top image information. If no operation object is present in the current top image information, the computer device can adjust the shooting angle of the top camera and collect images of other areas, for example by adjusting the extension angle and height of the extension rod or by directly adjusting the shooting pose of the top camera, thereby changing its shooting angle and ensuring that the operation object appears in the top image information. If no operation object is found after collecting at all angles, or after continuously collecting a certain amount of top image information, the computer device presents corresponding prompt information, prompting the current user that no operation object exists in the current operation area. If the operation object is present in the current top image information, the top image information is transmitted to the target user equipment. After receiving the top image information, the target user equipment presents it, for example on a display screen or by projection through a projector.
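For illustration, the "preset template features" check mentioned above could be realized with classical template matching; the score threshold and the template source are assumptions, and the disclosure does not prescribe a particular recognition method:

```python
import cv2
import numpy as np

def operation_object_present(top_image: np.ndarray,
                             template: np.ndarray,
                             score_threshold: float = 0.8) -> bool:
    """Rough check that a preset template of the operation object appears in
    the top image. Both inputs are assumed to be 3-channel BGR frames;
    normalized cross-correlation is one possible matching criterion."""
    gray = cv2.cvtColor(top_image, cv2.COLOR_BGR2GRAY)
    tmpl = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, tmpl, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, _ = cv2.minMaxLoc(scores)
    return max_score >= score_threshold
```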
The target user equipment receives and presents the top image information so that the target user can interact with it. While presenting the top image information, the target user equipment can acquire, through an input device, the target user's annotation information about the interaction object in the top image information. The interaction position of the interaction object may be predetermined, may be determined by the current user and transmitted to the target user equipment, or may be determined from the operation position of the target user (for example, a touch position, a cursor position, a gesture recognition result, or a voice recognition result). If the interaction position in the top image information is predetermined or determined by the current user, the target user equipment directly transmits the annotation information to the computer device; if it is determined from the operation position of the target user, the target user equipment takes the interaction position as annotation position information, generates corresponding image annotation information by combining it with the annotation information, and returns the image annotation information to the computer device. As in some embodiments, the method further comprises a step S204 (not shown), in which corresponding annotation position information is determined based on the first user operation; in step S203, the annotation information and the annotation position information are returned to the computer device, so that the computer device presents the annotation information through a corresponding projection device based on the annotation position information.
For example, an image coordinate system is established for the top image information, with a certain pixel point (for example, the pixel at the top-left corner of the image) as the origin, the horizontal axis as the X axis, and the vertical axis as the Y axis. The corresponding annotation position information includes coordinate position information, in this image coordinate system, of the annotation information or of the interaction object being annotated; the coordinate position information may be a coordinate set indicating the center position or the area range of the annotation information or of its interaction object. The annotation position information may be determined based on user operations of the current user or the target user, or may be preset. The corresponding annotation information is determined by the target user equipment based on the first user operation of the target user on the top image; for example, it is determined from mark information added by the target user via mouse input, keyboard input, touch-screen input, gesture input, or voice input. In some cases, the annotation position information is determined from the image coordinates, in the top image information, of the mouse click position, the position corresponding to keyboard input, the touch position on the touch screen, or the gesture recognition or voice recognition result; alternatively, the corresponding interaction object is first determined in the top image information from such an operation position, and the image annotation position is then determined based on that interaction object.
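As an expository sketch, the image annotation information described here might be modeled as follows; the field layout is hypothetical, not the disclosure's data format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ImageAnnotation:
    kind: str                              # e.g. "text", "graffiti", "3d_label"
    content: str                           # payload, e.g. the label text
    # Coordinate set in the top image's pixel coordinate system (origin at the
    # top-left pixel, X to the right, Y downward). A single point marks a
    # center position; several points can outline an area range.
    position: List[Tuple[float, float]] = field(default_factory=list)

note = ImageAnnotation(kind="text", content="check this valve",
                       position=[(312.0, 187.0)])
```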
The annotation information includes, but is not limited to, mark information about the interaction object in the top image information, such as stickers, text, graphics, video, graffiti, 2D labels, or 3D labels; in some embodiments, the presentation form of the corresponding image position information also differs according to the type of the annotation information.
For example, the projection device is disposed near the top camera, such as a projector and top camera mounted together on the top extension rod. The mapping relationship between the projection device and the top camera can be obtained through calculation, and based on this mapping relationship the annotation position information can be converted into projection position information, such as projection coordinate information of the annotation information in the projection image coordinate system of the projected image; this form of projection position information is only an example and is not limited here. The computer device projects the annotation information to the corresponding operation area according to the projection coordinate information, so that the annotation information is presented on the corresponding region of the operation object.
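For a flat operation surface, one common way to obtain such a camera-to-projector mapping is a calibrated homography between camera pixels and projector pixels; the sketch below assumes hypothetical calibration points and is not the disclosure's calibration procedure:

```python
import cv2
import numpy as np

# Four corresponding points observed in the top camera image and the projector
# image, e.g. from projecting a known calibration pattern (values are made up).
camera_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], np.float32)
projector_pts = np.array([[102, 64], [1820, 55], [1835, 1020], [95, 1032]], np.float32)
H, _ = cv2.findHomography(camera_pts, projector_pts)

# Convert annotation position information (top-image pixels) into projection
# position information (projector pixels).
annotation_px = np.array([[[312.0, 187.0]]], dtype=np.float32)
projection_px = cv2.perspectiveTransform(annotation_px, H)
print(projection_px[0, 0])  # where the projector should draw the annotation
```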
The foregoing mainly describes embodiments of the projection interaction method of the present application; the present application further provides a computer device and a target user device capable of implementing those embodiments, which are described below with reference to fig. 4 and 5.
FIG. 4 illustrates a computer device 100 for projection interaction according to an aspect of the present application, the device including a one-one module 101, a one-two module 102, and a one-three module 103: the one-one module 101, configured to acquire corresponding top image information through the top camera device and transmit the top image information to corresponding target user equipment, so that the target user equipment presents the top image information; the one-two module 102, configured to obtain image annotation information, about the top image information, of the target user corresponding to the target user equipment, where the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user; and the one-three module 103, configured to determine corresponding projection position information based on the annotation position information and present the annotation information in a projection manner based on the projection position information.
Here, the specific implementation corresponding to the one-one module 101, the one-two module 102, and the one-three module 103 shown in fig. 4 is the same as or similar to the embodiments of step S101, step S102, and step S103 shown in fig. 2, and is therefore not repeated here but is incorporated herein by reference.
In some embodiments, the computer device further includes a display device, and the device further includes a one-four module (not shown), configured to receive target image information transmitted by the target user device and present the target image information through the display device.
In some embodiments, the computer device further comprises a front camera device, wherein the front camera device is used for acquiring front image information of a current user corresponding to the computer device; the device further comprises a one-five module (not shown), configured to transmit the front image information to the target user device, so that the target user device presents the front image information.
In some embodiments, the device further includes a one-six module (not shown), configured to obtain a camera switching request regarding the current video interaction between the computer device and the target user device, wherein the image information of the current video interaction includes the front image information; the one-one module 101 is configured to, in response to the camera switching request, close the front camera device, start the top camera device, acquire corresponding top image information through the top camera device, and transmit the top image information to the corresponding target user equipment, so that the target user equipment presents the top image information.
In some embodiments, the one-six module is configured to receive a camera switching request, transmitted by the target user device, regarding the current video interaction between the computer device and the target user device, where the camera switching request is determined based on a second user operation of the target user.
In some embodiments, the one-five module is configured to obtain a video establishment request regarding the current video interaction; and, in response to the video establishment request, to establish the current video interaction between the computer device and the target user device based on the request, acquire corresponding front image information through the front camera device, and transmit the front image information to the target user device, so that the target user device presents the front image information.
In some embodiments, the device further comprises a one-seven module (not shown), configured to obtain a camera restoration request regarding the current video interaction between the computer device and the target user device; and, in response to the camera restoration request, to close the top camera device, start the front camera device, acquire corresponding front image information through the front camera device, and transmit the front image information to the corresponding target user equipment.
In some embodiments, the device further comprises a one-eight module (not shown), configured to obtain a camera turn-on request regarding the video interaction between the computer device and the target user device; the one-one module 101 is configured to, in response to the camera turn-on request, acquire corresponding top image information through the top camera device and transmit the top image information to the corresponding target user equipment; and the one-five module is configured to, in response to the camera turn-on request, acquire corresponding front image information through the front camera device and transmit the front image information to the target user device.
In some embodiments, the computer device comprises a lighting device; the device further includes a one-nine module (not shown), configured to turn on the lighting device if an activation request for the lighting device is acquired.
In some embodiments, the computer device further comprises an ambient light detection device, and the device further comprises a one-ten module (not shown), configured to acquire illumination intensity information of the current environment through the ambient light detection device and detect whether the illumination intensity information satisfies a preset illumination threshold; and, if not, to adjust the lighting device until the illumination intensity information satisfies the preset illumination threshold.
In some embodiments, the device further comprises a one-eleven module (not shown), configured to determine image brightness information corresponding to the top image information based on the top image information and detect whether the image brightness information satisfies a preset brightness threshold; and, if not, to adjust the lighting device until the image brightness information satisfies the preset brightness threshold.
In some embodiments, the device further comprises a one-twelve module (not shown), configured to obtain interaction position information of the current user with respect to the top image information; and to determine corresponding virtual presentation information based on the interaction position information and project and present the virtual presentation information through the projection device.
In some embodiments, the computer device comprises an infrared measurement device, wherein the acquiring of the interaction position information of the current user about the top image information comprises: determining the interaction position information of the top image information through the infrared measurement device.
In some embodiments, the one-two module 102 is configured to send the interaction position information to the target user equipment and receive annotation information, transmitted by the target user equipment and returned based on the interaction position information, so as to determine the image annotation information of the top image information, where the image annotation information includes the annotation information and the interaction position information, and the annotation information is determined by a first user operation of the target user.
In some embodiments, the computer device further comprises a distance measuring device; the device further includes a one-thirteen module (not shown), configured to determine distance information between the current user and the computer device through the distance measuring device, the computer device sending a notification if the distance information does not satisfy a preset distance threshold.
Here, the specific implementations corresponding to the one-four module through the one-thirteen module are the same as or similar to the embodiments of the foregoing steps S104 to S113, and are therefore not repeated here but are incorporated herein by reference.
Fig. 5 illustrates a target user device 200 for projection interaction according to an aspect of the present application, the device including a two-one module 201, a two-two module 202, and a two-three module 203: the two-one module 201, configured to receive and present top image information transmitted by the corresponding computer device and acquired by the corresponding top camera device; the two-two module 202, configured to obtain a first user operation of the target user corresponding to the target user device, and generate annotation information about the top image information based on the first user operation; and the two-three module 203, configured to return the annotation information to the computer device, so that the computer device presents the annotation information through a corresponding projection device.
Here, the specific implementation corresponding to the two-one module 201, the two-two module 202, and the two-three module 203 shown in fig. 5 is the same as or similar to the embodiments of step S201, step S202, and step S203 shown in fig. 3, and is therefore not repeated here but is incorporated herein by reference.
In some embodiments, the device further comprises a two-four module (not shown), configured to determine corresponding annotation position information based on the first user operation; the two-three module 203 is configured to return the annotation information and the annotation position information to the computer device, so that the computer device presents the annotation information through a corresponding projection device based on the annotation position information.
Here, the specific implementation corresponding to the two-four module is the same as or similar to the embodiment of step S204, and is therefore not repeated here but is incorporated herein by reference.
In addition to the methods and devices described in the above embodiments, the present application further provides a computer-readable storage medium storing computer code which, when executed, performs the method described in any of the foregoing embodiments.
The present application also provides a computer program product which, when executed by a computer device, performs the method described in any of the foregoing embodiments.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the foregoing embodiments.
FIG. 6 illustrates an exemplary system that can be used to implement the various embodiments described herein.
In some embodiments, as shown in FIG. 6, the system 300 can function as any of the devices described in the various embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions, and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controllers of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); and other media, now known or later developed, that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it will be obvious that the term "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (22)

1. A projection interaction method, applied to a computer device, wherein the computer device comprises a top camera device and a projection device, and the method comprises the following steps:
acquiring corresponding top image information through the top camera device, and transmitting the top image information to corresponding target user equipment for the target user equipment to present the top image information;
acquiring image annotation information, about the top image information, of a target user corresponding to the target user equipment, wherein the image annotation information comprises corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user;
and determining corresponding projection position information based on the labeling position information, and projecting and presenting the labeling information based on the projection position information.
2. The method of claim 1, wherein the computer device further comprises a display device, the method further comprising:
and receiving target image information transmitted by the target user equipment, and presenting the target image information through the display device.
3. The method according to claim 1 or 2, wherein the computer device further comprises a front camera device, the front camera device being used for collecting front image information of a current user corresponding to the computer device; wherein the method further comprises:
transmitting the front image information to the target user equipment, for the target user equipment to present the front image information.
4. The method of claim 3, wherein the method further comprises:
acquiring a camera switching request regarding the current video interaction between the computer device and the target user equipment, wherein the image information of the current video interaction comprises the front image information;
wherein the acquiring, by the top camera device, of corresponding top image information and transmitting the top image information to corresponding target user equipment, for the target user equipment to present the top image information, comprises:
responding to the camera switching request, closing the front camera device, starting the top camera device, acquiring corresponding top image information through the top camera device, and transmitting the top image information to corresponding target user equipment for the target user equipment to present the top image information.
5. The method of claim 4, wherein the obtaining a camera switching request regarding a current video interaction of the computer device with the target user device comprises:
receiving a camera switching request, transmitted by the target user equipment, regarding the current video interaction between the computer device and the target user equipment, wherein the camera switching request is determined based on a second user operation of the target user.
6. The method of claim 3 or 4, wherein the transmitting the front image information to the target user equipment for the target user equipment to present the front image information comprises:
acquiring a video establishment request regarding the current video interaction;
responding to the video establishment request, establishing the current video interaction between the computer device and the target user equipment based on the video establishment request, acquiring corresponding front image information through the front camera device, and transmitting the front image information to the target user equipment for the target user equipment to present the front image information.
7. The method of claim 4, wherein the method further comprises:
acquiring a camera restoration request regarding the current video interaction between the computer device and the target user equipment;
responding to the camera restoration request, closing the top camera device, starting the front camera device, acquiring corresponding front image information through the front camera device, and transmitting the front image information to the corresponding target user equipment.
8. The method of claim 3, wherein the method further comprises:
acquiring a camera turn-on request regarding video interaction between the computer device and the target user equipment;
wherein the acquiring, by the top camera device, of corresponding top image information and transmitting the top image information to corresponding target user equipment, for the target user equipment to present the top image information, comprises:
responding to the camera turn-on request, acquiring corresponding top image information through the top camera device, and transmitting the top image information to corresponding target user equipment;
wherein the transmitting the front image information to the target user equipment for the target user equipment to present the front image information comprises:
responding to the camera turn-on request, acquiring corresponding front image information through the front camera device, and transmitting the front image information to the target user equipment.
9. The method of claim 1, wherein the computer device comprises a lighting fixture; wherein the method further comprises:
and if the starting request of the lighting device is acquired, the lighting device is turned on.
10. The method of claim 9, wherein the computer device further comprises an ambient light detection means, the method further comprising:
acquiring illumination intensity information of the current environment based on the environment light detection device, and detecting whether the illumination intensity information meets a preset illumination threshold value;
if not, adjusting the lighting device until the illumination intensity information meets the preset lighting threshold value.
11. The method of claim 9, wherein the method further comprises:
determining image brightness information corresponding to the top image information based on the top image information, and detecting whether the image brightness information meets a preset brightness threshold value;
and if not, adjusting the lighting device until the image brightness information meets the preset brightness threshold.
12. The method of claim 1, wherein the method further comprises:
acquiring interaction position information of the current user about the top image information;
and determining corresponding virtual presentation information based on the interaction position information, and projecting and presenting the virtual presentation information through the projection device.
13. The method of claim 12, wherein the computer device comprises an infrared measurement device; wherein the acquiring of the interaction position information of the current user about the top image information comprises:
and determining the interaction position information of the top image information through the infrared measuring device.
14. The method of claim 12 or 13, wherein the acquiring of the image annotation information, about the top image information, of the target user corresponding to the target user equipment comprises:
and sending the interactive position information to the target user equipment, and receiving annotation information which is transmitted by the target user equipment and returned based on the interactive position information, so as to determine the image annotation information of the top image information, wherein the image annotation information comprises the annotation information and the interactive position information, and the annotation information is determined by the first user operation of the target user.
15. The method of claim 1, wherein the computer device further comprises a distance measuring device; wherein the method further comprises:
and determining the distance information between the current user and the computer equipment through the distance measuring device, and if the distance information does not meet a preset distance threshold, sending a notification by the computer equipment.
16. A projection interaction method, applied to a target user equipment, wherein the method comprises the following steps:
receiving and presenting top image information which is transmitted by corresponding computer equipment and acquired by corresponding top camera devices;
acquiring a first user operation of a target user corresponding to target user equipment, and generating annotation information about the top image information based on the first user operation;
and returning the labeling information to the computer equipment, so that the computer equipment presents the labeling information through a corresponding projection device.
17. The method of claim 16, wherein the method further comprises:
determining corresponding annotation position information based on the first user operation;
wherein, the returning the annotation information to the computer device for the computer device to present the annotation information through a corresponding projection device includes:
and returning the labeling information and the labeling position information to the computer equipment, so that the computer equipment presents the labeling information through a corresponding projection device based on the labeling position information.
18. A computer device for projection interaction, wherein the computer device comprises a top camera device and a projection device, the device comprising:
a one-one module, configured to acquire corresponding top image information through the top camera device and transmit the top image information to corresponding target user equipment, for the target user equipment to present the top image information;
a one-two module, configured to obtain image annotation information, about the top image information, of the target user corresponding to the target user equipment, wherein the image annotation information comprises corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user;
and a one-three module, configured to determine corresponding projection position information based on the annotation position information and present the annotation information in a projection manner based on the projection position information.
19. A target user device for projecting an interaction, wherein the device comprises:
a two-one module, configured to receive and present top image information transmitted by a corresponding computer device and acquired by a corresponding top camera device;
a two-two module, configured to obtain a first user operation of a target user corresponding to the target user device, and generate annotation information about the top image information based on the first user operation;
and a two-three module, configured to return the annotation information to the computer device, so that the computer device presents the annotation information through a corresponding projection device.
20. A computer device, wherein the device comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the steps of the method of any one of claims 1 to 17.
21. A computer-readable storage medium having stored thereon a computer program/instructions, characterized in that the computer program/instructions, when executed, cause a system to perform the steps of the method according to any one of claims 1 to 17.
22. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method of any of claims 1 to 17.
CN202210726966.8A 2022-03-11 2022-06-24 Projection interaction method, device, medium and program product Pending CN115185437A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022102415579 2022-03-11
CN202210241557 2022-03-11

Publications (1)

Publication Number Publication Date
CN115185437A true CN115185437A (en) 2022-10-14

Family

ID=83514634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210726966.8A Pending CN115185437A (en) 2022-03-11 2022-06-24 Projection interaction method, device, medium and program product

Country Status (2)

Country Link
CN (1) CN115185437A (en)
WO (1) WO2023168836A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107689082A (en) * 2016-08-03 2018-02-13 腾讯科技(深圳)有限公司 A kind of data projection method and device
CN110138831A (en) * 2019-03-29 2019-08-16 亮风台(上海)信息科技有限公司 A kind of method and apparatus carrying out remote assistance
CN111752376A (en) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 Labeling system based on image acquisition
CN111757074A (en) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 Image sharing marking system
CN111988493A (en) * 2019-05-21 2020-11-24 北京小米移动软件有限公司 Interaction processing method, device, equipment and storage medium
CN112231023A (en) * 2019-07-15 2021-01-15 北京字节跳动网络技术有限公司 Information display method, device, equipment and storage medium
CN113741698A (en) * 2021-09-09 2021-12-03 亮风台(上海)信息科技有限公司 Method and equipment for determining and presenting target mark information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577788A (en) * 2012-07-19 2014-02-12 华为终端有限公司 Augmented reality realizing method and augmented reality realizing device
JP2022046059A (en) * 2020-09-10 2022-03-23 セイコーエプソン株式会社 Information generation method, information generation system, and program
CN113096003B (en) * 2021-04-02 2023-08-18 北京车和家信息技术有限公司 Labeling method, device, equipment and storage medium for multiple video frames

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107689082A (en) * 2016-08-03 2018-02-13 腾讯科技(深圳)有限公司 A kind of data projection method and device
CN110138831A (en) * 2019-03-29 2019-08-16 亮风台(上海)信息科技有限公司 A kind of method and apparatus carrying out remote assistance
CN111752376A (en) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 Labeling system based on image acquisition
CN111757074A (en) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 Image sharing marking system
CN111988493A (en) * 2019-05-21 2020-11-24 北京小米移动软件有限公司 Interaction processing method, device, equipment and storage medium
CN112231023A (en) * 2019-07-15 2021-01-15 北京字节跳动网络技术有限公司 Information display method, device, equipment and storage medium
CN113741698A (en) * 2021-09-09 2021-12-03 亮风台(上海)信息科技有限公司 Method and equipment for determining and presenting target mark information

Also Published As

Publication number Publication date
WO2023168836A1 (en) 2023-09-14

Similar Documents

Publication Publication Date Title
CN113741698B (en) Method and device for determining and presenting target mark information
US9584766B2 (en) Integrated interactive space
CN108304075B (en) Method and device for performing man-machine interaction on augmented reality device
US8957856B2 (en) Systems, methods, and apparatuses for spatial input associated with a display
Molyneaux et al. Interactive environment-aware handheld projectors for pervasive computing spaces
WO2019227905A1 (en) Method and equipment for performing remote assistance on the basis of augmented reality
US9723293B1 (en) Identifying projection surfaces in augmented reality environments
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
CN103391411A (en) Image processing apparatus, projection control method and program
US9632592B1 (en) Gesture recognition from depth and distortion analysis
US11880999B2 (en) Personalized scene image processing method, apparatus and storage medium
TW201432495A (en) User interface for augmented reality enabled devices
JP2014533347A (en) How to extend the range of laser depth map
US20150169085A1 (en) Information processing apparatus, program, information processing method, and information processing system
JP2014170511A (en) System, image projection device, information processing device, information processing method, and program
CN109582122B (en) Augmented reality information providing method and device and electronic equipment
CN104166509A (en) Non-contact screen interaction method and system
US9304582B1 (en) Object-based color detection and correction
US11561651B2 (en) Virtual paintbrush implementing method and apparatus, and computer readable storage medium
CN109656364B (en) Method and device for presenting augmented reality content on user equipment
US20160117553A1 (en) Method, device and system for realizing visual identification
CN113965665A (en) Method and equipment for determining virtual live broadcast image
CN110213407B (en) Electronic device, operation method thereof and computer storage medium
KR20150079387A (en) Illuminating a Virtual Environment With Camera Light Data
CN115185437A (en) Projection interaction method, device, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Applicant before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.