WO2023168836A1 - Projection interaction method, and device, medium and program product - Google Patents

Projection interaction method, and device, medium and program product

Info

Publication number
WO2023168836A1
WO2023168836A1 · PCT/CN2022/095921 · CN2022095921W
Authority
WO
WIPO (PCT)
Prior art keywords
information
target user
image information
annotation
computer
Prior art date
Application number
PCT/CN2022/095921
Other languages
French (fr)
Chinese (zh)
Inventor
廖春元
方中慧
杨浩
林祥杰
Original Assignee
亮风台(上海)信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 亮风台(上海)信息科技有限公司 filed Critical 亮风台(上海)信息科技有限公司
Publication of WO2023168836A1 publication Critical patent/WO2023168836A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0489Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • the present application relates to the field of communications, and in particular to a technology for projection interaction.
  • Augmented Reality (AR) is a technology that calculates the position and angle of camera images in real time and overlays corresponding digital information such as virtual three-dimensional model animations, video, text, and pictures.
  • The goal of this technology is to embed the virtual world in the real world and enable interaction between the two.
  • Video call technology usually refers to a communication method based on the Internet and mobile Internet that transmits human voice and images in real time between smart devices.
  • Existing computer devices lack interactive capabilities beyond video communication.
  • One purpose of this application is to provide a projection interaction method, device, medium and program product.
  • a projection interaction method is provided, wherein the method is applied to a computer device, the computer device includes a top camera device and a projection device, and the method includes:
  • collecting corresponding top image information through the top camera device, and transmitting the top image information to the corresponding target user device, so that the target user device can present the top image information;
  • obtaining image annotation information, about the top image information, of the target user device corresponding to the target user, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user;
  • determining corresponding projection position information based on the annotation position information, and projecting and presenting the annotation information based on the projection position information.
  • a projection interaction method is provided, wherein the method is applied to a target user device, and the method includes:
  • the annotation information is returned to the computer device for the computer device to present the annotation information through a corresponding projection device.
  • a computer device for projection interaction includes a top camera device and a projection device.
  • the device includes:
  • a one-one module configured to collect corresponding top image information through the top camera device, and transmit the top image information to the corresponding target user device, so that the target user device can present the top image information;
  • a one-two module configured to obtain image annotation information, about the top image information, of the target user device corresponding to the target user, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by the first user operation of the target user;
  • a one-three module configured to determine corresponding projection position information based on the annotation position information, and to project and present the annotation information based on the projection position information.
  • a target user device for projection interaction wherein the device includes:
  • a two-one module configured to receive and present the top image information collected by the corresponding top camera device and transmitted by the corresponding computer device;
  • a two-two module configured to obtain the first user operation of the target user corresponding to the target user device, and to generate annotation information about the top image information based on the first user operation;
  • a two-three module configured to return the annotation information to the computer device, so that the computer device can present the annotation information through a corresponding projection device.
  • a computer device wherein the device includes:
  • a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of any of the methods described above.
  • a computer-readable storage medium on which a computer program/instruction is stored, wherein the computer program/instruction, when executed, causes the system to perform the steps of any one of the methods described above.
  • a computer program product including a computer program/instruction, characterized in that when the computer program/instruction is executed by a processor, the steps of any of the above methods are implemented.
  • through interaction between the two parties, this application projects and presents the target user's image annotation information on the computer device side, which provides the current user with engaging interactions and a more realistic and natural augmented reality experience. In particular, it can enhance the sense of interaction and participation in parent-child companionship for parents who are not with their children.
  • Figure 1 shows a system topology diagram of projection interaction according to an embodiment of the present application
  • Figure 2 shows a method flow chart of a projection interaction method according to an embodiment of the present application
  • Figure 3 shows a method flow chart of a projection interaction method according to an embodiment of the present application
  • Figure 4 shows functional modules of a computer device according to an embodiment of the present application
  • Figure 5 shows functional modules of a target user device according to an embodiment of the present application
  • Figure 6 illustrates an example system that may be used to implement various embodiments described in this application.
  • the terminal, the device of the service network, and the trusted party all include one or more processors (for example, a central processing unit (CPU)), input/output interfaces, network interfaces, and memory.
  • Memory may include non-persistent memory in computer-readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media, and can implement information storage by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • the equipment referred to in this application includes but is not limited to user equipment, network equipment, or equipment composed of user equipment and network equipment integrated through a network.
  • the user equipment includes but is not limited to any kind of mobile electronic product that can interact with the user, such as smart phones, tablet computers, smart desk lamps, etc.
  • the mobile electronic product can use any operating system, such as Android operating system, iOS operating system, etc.
  • the network device includes an electronic device that can automatically perform numerical calculations and information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (ASIC), programmable logic devices (PLD), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, etc.
  • the network equipment includes but is not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes but is not limited to the Internet, wide area network, metropolitan area network, local area network, VPN network, wireless self-organizing network (Ad Hoc network), etc.
  • the device may also be a program running on the user equipment, the network equipment, or a device formed by integrating the user equipment and the network equipment, or the network equipment and a touch terminal, through a network.
  • Figure 1 shows a typical scenario of this application.
  • the computer device 100 establishes a communication connection with the target user device 200.
  • the computer device 100 transmits the corresponding top image information to the target user device 200; the target user device 200 receives the top image information.
  • the corresponding annotation information or image annotation information is determined based on the top image information, and then the annotation information or image annotation information is returned to the computer device 100 .
  • the computer device 100 includes but is not limited to any electronic product that can perform human-computer interaction with the user, such as smart phones, tablet computers, smart desk lamps, smart projection devices, etc.
  • the computer device includes a top camera device, such as a camera or a depth camera, for collecting image information related to the operating object of the current user of the computer device (for example, a book the user is currently reading or a workpiece the user is operating) from above the operating object (for example, directly above or diagonally above).
  • the target user equipment includes but is not limited to any mobile electronic product that can perform human-computer interaction with the user, such as smart phones, tablet computers, personal computers, etc.
  • the target user device includes a display device for presenting the top image information, for example, a liquid crystal display screen or a projector; the target user device also includes an input device for collecting the user's annotation information or image annotation information about the top image information.
  • the annotation information includes but is not limited to mark information, such as stickers, text, graphics, videos, graffiti, or 2D/3D marks, about interactive objects in the top image information.
  • the corresponding image annotation information includes the above annotation information and the corresponding annotation position information.
  • the annotation position information includes, for example, the image coordinate information of the annotation information, or of the interactive object of the annotation information, in the image coordinate system; the annotation position information here is only an example and is not limiting.
  • the data transmission between the target user equipment and the computer equipment in this application may be based on a direct communication connection between the two devices, or may be forwarded via a corresponding server, etc.
  • FIG. 2 shows a projection interaction method according to one aspect of the present application.
  • the method is applied to the computer device 100 and can be applied to the system topology shown in Figure 1.
  • the method includes step S101, step S102 and step S103.
  • step S101 the corresponding top image information is collected through the top camera device, and the top image information is transmitted to the corresponding target user equipment, so that the target user equipment can present the top image information;
  • in step S102, image annotation information, about the top image information, of the target user device corresponding to the target user is obtained, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user; in step S103, corresponding projection position information is determined based on the annotation position information, and the annotation information is projected and presented based on the projection position information.
  • the corresponding top image information is collected through the top camera device, and the top image information is transmitted to the corresponding target user equipment, so that the target user equipment can present the top image information.
  • the current user is, for example, user A, and the target user is, for example, user B.
  • the computer equipment includes a top camera device, which is used to collect image information related to the operating object of the current user.
  • the top camera device collects image information related to the operation object from above the operation object (for example, directly above or diagonally above).
  • the top camera device is disposed directly above the operation object, and the optical axis of the top camera device can pass through the geometric center of the operation object.
  • the computer device is usually provided with a forward extension rod, and the top camera device is placed on the extension rod facing directly downward to collect the image information below; below the extension rod, between the computer device and the user, there is an operation area for the operation object.
  • the operation area can be an extension area below the computer device itself (for example, the top image information can be obtained more accurately through the computer device's own extension area), or a blank area set between the computer device and the user (for example, a blank desktop set as the operation area).
  • the base of the computer equipment remains horizontal to maintain the stability of the computer equipment.
  • the optical axis of the top camera device points vertically downward, and the surface where the operation area is located remains horizontal, so that the distances from each area of the operation object captured in the top image information to the top camera device are similar.
  • the computer device collects top image information about the operating object through the top camera device, and sends the top image information directly or via a network device to the target user device.
  • the computer device may first perform recognition on the top image information (for example, identifying and tracking a preset operation object through corresponding template features), thereby ensuring that the corresponding operation object exists in the top image information. If there is no operation object in the current top image information, the computer device can adjust the camera angle of the top camera device and collect images of other areas to ensure that an operation object appears in the top image information, for example, by adjusting the extension angle and height of the extension rod, or by directly adjusting the pose information of the top camera device, thereby changing the camera angle of the top camera device.
  • the computer device displays corresponding prompt information to remind the current user that there is no operation object in the current operation area. If there is an operation object in the current top image information, the top image information is transmitted to the target user device. After receiving the top image information, the target user equipment presents the top image information, for example, displays the top image information through a display screen or displays the top image information through projection through a projector.
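For illustration only, the presence check for an operation object could be sketched as a brute-force template match over a grayscale image. The application only speaks of "template features", so the sum-of-absolute-differences criterion and all names below are assumptions, not the patented method:

```python
def find_template(image, template, threshold=10):
    """Brute-force SAD template search. `image` and `template` are 2D lists
    of grayscale values. Returns the (row, col) of the best match, or None
    if the best score exceeds `threshold` (no operation object found)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_score = None, float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            # sum of absolute differences over the candidate window
            score = sum(abs(image[r + i][c + j] - template[i][j])
                        for i in range(th) for j in range(tw))
            if score < best_score:
                best_score, best = score, (r, c)
    return best if best_score <= threshold else None
```

If `find_template` returns `None`, the device would show the prompt or re-aim the camera as described above.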
  • the target user device receives and presents the top image information for the target user to perform operations and interactions based on the top image information. While presenting the top image information, the target user device can also collect the target user's annotation information about the interactive object in the top image information through the input device.
  • the interaction position of the interactive object may be predetermined, or may be determined by user operation.
  • the interaction position may also be determined based on the target user's operation position (for example, touch position, cursor position, gesture recognition result, or voice recognition result).
  • if the interaction position is predetermined, the target user device directly transmits the annotation information to the computer device; if the interaction position in the top image information is determined by the target user's operation position, the target user device determines the interaction position as the annotation position information, combines it with the annotation information to generate corresponding image annotation information, and returns the image annotation information to the computer device.
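As a sketch of the data the target user device returns (all names are hypothetical, not taken from the application), the image annotation information can be modeled as annotation content plus its position in the top image:

```python
from dataclasses import dataclass

@dataclass
class ImageAnnotation:
    """Image annotation information: annotation content plus its
    annotation position (pixel coordinates) in the top image."""
    kind: str     # e.g. "sticker", "text", "graffiti", "3d_mark"
    content: str  # payload or resource reference
    x: int        # annotation position: X pixel coordinate
    y: int        # annotation position: Y pixel coordinate

def make_image_annotation(kind, content, touch_xy):
    """Combine annotation content with the target user's operation
    position (e.g. a touch point) into image annotation information."""
    x, y = touch_xy
    return ImageAnnotation(kind, content, x, y)
```

The computer device would then read `x`/`y` as the annotation position information when computing the projection position.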
  • step S102 obtain image annotation information about the top image information of the target user device corresponding to the target user, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the The annotation information is determined by the first user operation of the target user.
  • an image coordinate system is established based on the top image information.
  • the image coordinate system uses a certain pixel (for example, the pixel in the upper left corner of the image) as the origin, with the horizontal axis as the X-axis and the vertical axis as the Y-axis, to establish a corresponding image/pixel coordinate system.
  • the corresponding annotation position information includes the coordinate position information, in the image coordinate system, of the corresponding annotation information or of the interactive object of the annotation information.
  • the coordinate position information can indicate, for example, the center position of the annotation information or of its interactive object, or a coordinate set covering an area range.
  • the annotation location information may be determined based on user operations of the current user and the target user, or may be preset.
  • the corresponding annotation information is determined by the target user device based on the first user operation of the target user on the top image; for example, the corresponding annotation is determined based on mark information added by the target user through mouse input, keyboard input, touch screen input, gesture input, or voice input.
  • the annotation position information is determined from the image coordinate information, in the top image information, of the corresponding mouse click position, keyboard input position, touch screen touch position, gesture recognition result, or voice recognition result; alternatively, the corresponding interactive object is first determined in the top image information based on the mouse click position, keyboard input position, touch position, gesture recognition, or voice recognition results, and then the corresponding image annotation position is determined based on the interactive object.
  • the annotation information includes but is not limited to tag information such as stickers, text, graphics, videos, graffiti, 2D marks or 3D marks about interactive objects in the top image information.
  • for different types of annotation information, the presentation of the corresponding image position information also differs.
  • corresponding projection position information is determined based on the annotation position information, and the annotation information is projected and presented based on the projection position information.
  • the projection device is placed near the top camera device; for example, the corresponding projector and the top camera are both placed on the top extension rod.
  • the mapping relationship between the corresponding projection device and the top camera device can be obtained through calculation.
  • the mapping relationship can be calibrated as follows: s1: use the projection device to project a calibration image containing a specific pattern (such as a checkerboard) onto the operation area; s2: use the top camera to collect a video image containing the operation area; s3: use the image collected in step s2 to identify the coordinate information of each pattern in the display; s4: establish the correspondence between the pattern coordinates in the original calibration image projected in s1 and the pattern coordinates in the video image of the operation area collected in s2; s5: estimate the camera's intrinsic and extrinsic parameters, as well as distortion parameters, based on the two sets of coordinates from s4; s6: use the parameters obtained in s5 to realize the mapping between the two images.
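A minimal sketch of steps s4–s6, under two simplifying assumptions not made by the application: lens distortion is ignored, and the camera-to-projector mapping is treated as affine (the application estimates full intrinsic, extrinsic, and distortion parameters). All names below are illustrative:

```python
def solve_affine(cam_pts, proj_pts):
    """s4/s5 simplified: fit proj = A @ [x, y, 1] from exactly three
    point correspondences between camera and projector coordinates."""
    def solve3(M, b):
        # Gauss-Jordan elimination with partial pivoting on a 3x3 system
        a = [row[:] + [bi] for row, bi in zip(M, b)]
        for i in range(3):
            p = max(range(i, 3), key=lambda r: abs(a[r][i]))
            a[i], a[p] = a[p], a[i]
            for r in range(3):
                if r != i:
                    f = a[r][i] / a[i][i]
                    a[r] = [x - f * y for x, y in zip(a[r], a[i])]
        return [a[i][3] / a[i][i] for i in range(3)]

    M = [[x, y, 1.0] for x, y in cam_pts]
    row_x = solve3(M, [u for u, _ in proj_pts])
    row_y = solve3(M, [v for _, v in proj_pts])
    return [row_x, row_y]  # 2x3 affine matrix

def apply_affine(A, pt):
    """s6: map a camera-image point into projector coordinates."""
    x, y = pt
    return (A[0][0] * x + A[0][1] * y + A[0][2],
            A[1][0] * x + A[1][1] * y + A[1][2])
```

With more correspondences and distortion terms, a least-squares calibration as in the patent's s5 would replace the exact three-point solve.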
  • the above-mentioned calculation method of the mapping relationship between the projection device and the top camera device is only an example, and other existing or future methods for calculating the mapping relationship between the projection device and the top camera device may be applicable.
  • the annotation position information can be converted into projection position information, for example, the projection coordinate information of the annotation information in the coordinate system of the projection image; the projection position information here is only an example and is not limiting.
  • the computer device projects the annotation information to the corresponding operation area according to the corresponding projection coordinate information, thereby presenting the annotation information in the corresponding area in the operation object.
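Once calibration yields a camera-to-projector mapping, converting annotation position information into projection position information is a coordinate transform. A hedged sketch, assuming (the application does not specify this representation) that the mapping is expressed as a 3×3 homography matrix:

```python
def annotation_to_projection(H, annot_xy):
    """Map an annotation's image coordinates to projector coordinates
    via a 3x3 homography H (assumed representation of the calibrated
    camera-to-projector mapping)."""
    x, y = annot_xy
    # multiply the homogeneous point [x, y, 1] by H, then dehomogenize
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)
```

The computer device would then draw the annotation at the returned projector coordinates so it lands on the corresponding area of the operation object.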
  • the computer device further includes a display device, and the method further includes step S104 (not shown).
  • in step S104, the target image information transmitted by the target user device is received, and the target image information is presented through the display device.
  • the computer device further includes a display device, which is used to present image information stored or received by the computer device, such as a liquid crystal display screen.
  • the display device is placed directly in front of or near the computer device facing the current user.
  • the target user equipment is equipped with a corresponding camera device for collecting target image information on the target user equipment side.
  • the target image information includes image information about the target user; the target user device can transmit the target image information to the computer device, and the computer device receives the target image information and presents it through the display device.
  • if the target image information and the corresponding top image information are contained in video streams collected in real time, then both the computer device and the target user device can present the real-time video streams corresponding to the top image information and the target image information.
  • the corresponding video stream includes not only images collected by the camera device but also voice information collected by a voice input device; while the computer device and the target user device display the real-time video streams corresponding to the target image information and the top image information, they also play the corresponding voice information through a voice output device; that is, the target user device and the computer device conduct audio and video communication.
  • the computer device further includes a front-facing camera device, wherein the front-facing camera device is used to collect front image information about the current user of the computer device; the method further includes step S105 (not shown): in step S105, the front image information is transmitted to the target user device, so that the target user device can present the front image information.
  • the computer device further includes a front-facing camera device for collecting image information related to the current user holding the computer device. The front-facing camera device is placed on the side of the computer device facing the current user; for example, it can be disposed above the display device.
  • the front-facing camera device is mainly used to collect image information related to the current user's head, and is used to realize video interactive communication between the current user and the target user.
  • the corresponding front-facing camera device collects front-facing image information about the current user when enabled, and transmits the front-facing image information to the target user device for display by the display device of the target user device.
  • if the front image information is contained in a video stream collected in real time, the target user device can present the real-time video stream corresponding to the front image information through its display device.
  • the front camera device may be enabled based on a video creation request from the target user device or the computer device, switched to from the enabled state of the top camera device, or turned on when an independent enable control of the front camera device is triggered.
  • the method further includes step S106 (not shown).
• in step S106, a camera switching request regarding the current video interaction between the computer device and the target user device is obtained, wherein the image information of the current video interaction includes the front image information; wherein, in step S101, in response to the camera switching request, the front camera device is turned off and the top camera device is enabled, the corresponding top image information is collected through the top camera device, and the top image information is transmitted to the corresponding target user equipment, so that the target user equipment can present the top image information. For example, only one of the front camera device and the top camera device of the computer equipment is enabled at any given time, thereby reducing the bandwidth pressure of video interaction and ensuring the efficiency and orderliness of the video interaction process.
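The single-active-camera rule described above can be illustrated with a minimal sketch. All names here (`CameraController`, `handle_switch_request`, etc.) are hypothetical and for illustration only; they are not defined in the application.

```python
# Hypothetical sketch of the "only one camera enabled at a time" rule.
class CameraController:
    """Keeps at most one of the front/top camera devices enabled."""

    def __init__(self):
        self.front_enabled = True   # assume video interaction starts in front-camera mode
        self.top_enabled = False

    def handle_switch_request(self):
        """Camera switching request: turn off the front camera, enable the top camera."""
        if self.front_enabled:
            self.front_enabled = False
            self.top_enabled = True
        return self.active_camera()

    def handle_restore_request(self):
        """Camera restoration request: return to the front camera."""
        if self.top_enabled:
            self.top_enabled = False
            self.front_enabled = True
        return self.active_camera()

    def active_camera(self):
        return "top" if self.top_enabled else "front"
```

A switching request followed by a restoration request thus moves the device from the front camera to the top camera and back, with only one camera active at each step.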
  • the computer device is provided with a corresponding camera switching control.
  • the camera switching control may be a physical button provided on the computer device or a virtual control presented on the current screen.
• the camera switching control is used to switch from the enabled state of the front camera device to the enabled state of the top camera device.
• in some cases, the camera switching control is also used to adjust the computer device from the enabled state of the top camera device back to the enabled state of the front camera device; in other words, the camera switching control toggles between the activation states of the top camera device and the front camera device. In other cases, the camera switching control is only used to switch from the enabled state of the front camera device to that of the top camera device.
  • the computer device is also provided with a corresponding camera restore control, and the camera restore control is used to adjust the computer device from the enabled state of the top camera device to the enabled state of the front camera device.
  • the computer device determines the camera switching request by recognizing the current user's interactive input operations such as gestures, voice, and head movements.
• the target user device may also be provided with a corresponding camera switching control, or the target user device determines the camera switching request by recognizing the target user's gestures, voice, head movements and other interactive input operations, which will not be described in detail here.
  • the computer device is in a state of collecting front image information during the video interaction process.
• when a user (for example, the current user or the target user) touches the camera switching control, the computer device turns off the front camera device and enables the top camera device, collects corresponding top image information through the top camera device, and transmits the top image information to the corresponding target user equipment, so that the target user equipment can present the top image information.
  • the target user can perform a first user operation on the top image information to determine the icon annotation information of the top image information.
  • the computer device determines the projection position information based on the annotation position information, and projects and presents the annotation information based on the projection position information.
• the target user can thus provide intuitive guidance regarding the current user's operation object.
• the activation status of different camera devices corresponds to different interaction modes. For example, when the computer device currently only turns on the front camera device, the computer device is in a video communication mode, which is used to achieve video communication between the current user and the target user; when the computer device currently only turns on the top camera device, the computer device is in a video guidance mode, which is used to let the target user provide communication guidance about the current user's operation object; when the computer device currently turns on both the front camera device and the top camera device, the computer device not only enables the target user to communicate about and guide the current user's operation object, but also facilitates video communication between the target user and the current user.
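The mapping from camera enablement to interaction mode described above can be summarized in a small helper. The function name and mode strings are illustrative, not terms defined in the application.

```python
# Illustrative mapping from camera enablement to interaction mode.
def interaction_mode(front_on: bool, top_on: bool) -> str:
    """Return the interaction mode implied by which camera devices are enabled."""
    if front_on and top_on:
        return "video communication + guidance"  # both cameras enabled
    if front_on:
        return "video communication"             # current user <-> target user call
    if top_on:
        return "video guidance"                  # target user guides the operation object
    return "idle"                                # no camera enabled
```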
• in step S105, a camera switching request transmitted by the target user equipment regarding the current video interaction between the computer device and the target user equipment is received, wherein the camera switching request is determined based on a second user operation at the target user equipment.
• the target user device may generate a camera switching request for the current video interaction based on the target user's second user operation (for example, a trigger operation on the camera switching control, or an interactive input operation instruction regarding camera switching, etc.), and transmit the camera switching request to the corresponding computer device; or, the target user equipment sends the second user operation to the corresponding server, and the server generates the corresponding camera switching request based on the second user operation and sends the camera switching request to the computer device, etc.
• the terms first user operation and second user operation are only used to distinguish the corresponding functions of the user operations, and do not imply any order, magnitude, or other relationship between the operations.
• when the target user equipment obtains and presents the front image information of the front camera device, the corresponding camera switching control is presented in the current screen, and the target user equipment can determine the camera switching request by collecting the second user operation, such as the target user's touch operation on the camera switching control.
• alternatively, the target user device can collect the second user operation, such as the target user's gesture, voice, head movement, etc., and determine the camera switching request accordingly.
  • the camera switching request usually occurs after the video is established.
• the corresponding front-facing camera device can be enabled simultaneously based on the video creation request, or can be enabled based on the interaction between the two parties after the target user device and the computer device establish communication; the front-facing camera device can also be enabled by switching from the enabled state of the top camera device after the target user equipment establishes communication with the computer device and activates the top camera device.
• in step S105, a video establishment request regarding the current video interaction is obtained; in response to the video establishment request, a connection between the computer device and the target user device is established based on the video establishment request.
  • the corresponding front image information is collected through the front camera device and transmitted to the target user equipment, so that the target user equipment can present the front image information.
  • the current video interaction may be initiated based on a video creation request determined by the user operation of the current user or the target user.
• a video stream is transmitted between the target user device and the computer device. For example, the computer device transmits the corresponding front image information to the target user device and the target user device transmits the corresponding target image information to the computer device, or only the computer device transmits the front image information to the target user device alone, etc.
  • the video creation request may be determined based on the current user's initiated operation on the computer device (for example, a triggering operation on a video creation control, or an interactive input operation instruction on video creation, etc.), and the video creation request includes a user identification corresponding to the target user.
• user identification information includes but is not limited to unique tag information used to identify the target user, such as name, image, ID card, mobile phone number, application serial number or device access control address information, etc.; the computer device can send the video creation request to the network device for the network device to forward to the target user device and establish video interaction between the two, or the computer device can directly send the video creation request to the target user device and establish video interaction between the two.
• the video creation request may be determined based on the target user's initiating operation on the target user device (for example, a triggering operation on the video creation control, or an interactive input operation instruction regarding video creation, etc.), and the video creation request includes user identification information corresponding to the current user; the target user device can send the video creation request to the network device for the network device to forward to the computer device and establish video interaction between the two, or the target user device can directly send the video creation request to the computer device and establish video interaction between the two.
  • the method further includes step S107 (not shown).
• in step S107, a camera restoration request regarding the current video interaction between the computer device and the target user device is obtained; in response to the camera restoration request, the top camera device is turned off and the front camera device is enabled, corresponding front image information is collected through the front camera device, and the front image information is transmitted to the corresponding target user equipment.
  • the camera restoration request is used to adjust the computer device from the enabled state of the top camera device to the enabled state of the front camera device.
• after the restoration, the corresponding top camera device is in a closed state, and only front image information is collected and transmitted; for example, the computer device transmits the corresponding front image information to the target user device and the target user device transmits the corresponding target image information to the computer device, or only the computer device transmits the front image information to the target user device alone, etc.
• in some embodiments, the camera restoration request is initiated based on the current user's/target user's touch operation on the camera restoration control; in other embodiments, the camera restoration request is initiated based on the current user's/target user's interactive input operation instructions regarding camera restoration (such as gestures, voice, head movements, etc.). Based on the camera restoration request, the computer device turns off the top camera device currently enabled for video interaction and enables the corresponding front camera device, thereby restoring the camera state of the video interaction.
  • the method further includes step S108 (not shown).
• in step S108, a camera start request regarding video interaction between the computer device and the target user device is obtained; wherein, in step S101, in response to the camera start request, the corresponding top image information is collected through the top camera device, and the top image information is transmitted to the corresponding target user equipment; in step S105, in response to the camera start request, corresponding front image information is collected through the front camera device, and the front image information is transmitted to the target user equipment.
  • the camera start request is used to enable the front camera device and the top camera device at the same time.
  • the camera start request can be included in the video creation request to enable the front camera device and the top camera device at the same time during the video creation process.
• corresponding closing controls are provided for the front camera device and the top camera device, so that the current user and/or the target user can close the front camera device or the top camera device; when both camera devices are closed at the same time, the video interaction between the computer device and the target user device is turned off.
• the camera start request may also be issued during the video interaction process (when only one of the front camera device and the top camera device is enabled) to call the other camera device, so as to achieve the effect of the two camera devices being enabled at the same time.
• the camera startup request may be generated based on the touch operation of the current user or the target user on an opening control.
• the camera startup request may also be generated based on the current user's or the target user's interactive input operation instructions regarding camera startup (such as gestures, voice, head movements, etc.); the two camera devices are enabled simultaneously based on the response of the computer device or the target user device to the camera startup request.
  • the computer device includes a lighting device; wherein the method further includes step S109 (not shown).
• in step S109, if an enabling request for the lighting device is obtained, the lighting device is turned on.
  • the computer equipment may include a lighting device for adjusting the brightness of the operating area.
  • the activation request is used to turn on the corresponding lighting device and project a certain intensity of light on the operating area to change the brightness of the environment.
• the activation request can be generated by the computer device based on the current user's operation, or it can be generated based on the target user's operation on the corresponding target user equipment (such as a touch operation on a lighting control, etc.) and transmitted to the computer device, etc.
• in some cases, the enable request is included in the corresponding video creation request, so that the lighting device is turned on while the video interaction is being established; in other cases, the enable request is determined based on a user operation of the target user/current user during the video interaction, or when there is no video interaction, thereby turning on the lighting device to adjust the ambient brightness.
  • the computer device further includes an ambient light detection device
  • the method further includes step S110 (not shown).
• in step S110, the illumination intensity information of the current environment is obtained through the ambient light detection device, and it is detected whether the light intensity information meets a preset lighting threshold; if not, the lighting device is adjusted until the light intensity information meets the preset lighting threshold.
  • the lighting device of the computer equipment includes a lighting device with adjustable brightness.
  • the lighting brightness of the lighting device can be adjusted based on the touch selection operation of the brightness adjustment control by the current user or the target user.
• alternatively, the current user or the target user can interactively input operation instructions regarding brightness adjustment (such as gestures, voice, head movements, etc.) to adjust the lighting brightness of the lighting device.
  • the computer equipment includes an ambient light detection device, which is used to cooperate with the lighting device to realize automatic adjustment of lighting and ensure the controllability and applicability of ambient brightness.
  • the computer equipment measures the light intensity information of the current environment based on the ambient light detection device, and compares the light intensity information with a preset lighting threshold.
• the preset lighting threshold can be a specific light intensity value or an interval composed of multiple light intensity values, etc., which is not limited here.
• if the illumination intensity information is the same as the lighting threshold information, or the intensity difference is less than a preset difference threshold, etc., then it is determined that the illumination intensity information satisfies the preset lighting threshold; or, if the illumination intensity information is within the lighting threshold interval, it is determined that the light intensity information satisfies the preset lighting threshold, etc. If not, the corresponding lighting adjustment information is calculated based on the lighting intensity information and the lighting threshold information, and the corresponding lighting device is adjusted based on the lighting adjustment information; for example, the lighting adjustment information includes a lighting adjustment value by which the current lighting intensity is increased or decreased, etc.
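The feedback loop of step S110 can be sketched as follows. The sensor and lamp hooks (`read_light_intensity`, `set_lamp_level`) and the threshold interval `[low, high]` are hypothetical placeholders for the ambient light detection device and lighting device described above, not APIs defined in the application.

```python
# Minimal feedback sketch: step the lamp level until the measured light
# intensity falls inside the preset lighting threshold interval [low, high].
def adjust_lighting(read_light_intensity, set_lamp_level, low, high,
                    level=0, step=1, max_level=10, max_iters=20):
    """Raise/lower the lamp level until intensity fits the threshold interval."""
    for _ in range(max_iters):
        lux = read_light_intensity()
        if low <= lux <= high:          # preset lighting threshold satisfied
            return level
        # lighting adjustment value: increase if too dark, decrease if too bright
        level += step if lux < low else -step
        level = max(0, min(max_level, level))
        set_lamp_level(level)
    return level
```

With a simulated sensor whose reading grows with the lamp level, the loop converges to a level whose reading lies in the target interval.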
  • the method further includes step S111 (not shown).
• in step S111, the image brightness information corresponding to the top image information is determined based on the top image information, and it is detected whether the image brightness information meets a preset brightness threshold; if not, the lighting device is adjusted until the image brightness information meets the preset brightness threshold.
  • the computer device can also determine the corresponding image brightness information based on the top image information, for example, calculate the average brightness information of the current image information based on the pixel brightness information of part (for example, sampling some pixels, etc.) or all pixels in the top image information. The image brightness information is compared with a preset brightness threshold.
  • the preset brightness threshold can be a specific image brightness value, or an interval composed of multiple image brightness values, etc., which is not limited here. If the image brightness information is the same as the brightness threshold information or the brightness difference is less than the preset difference threshold, etc., then it is determined that the image brightness information satisfies the preset brightness threshold; or if the image brightness information is within the brightness threshold interval, then It is determined that the image brightness information satisfies a preset brightness threshold, etc. If not, the corresponding lighting adjustment information is calculated based on the image brightness information and the brightness threshold information, and the corresponding lighting device is adjusted based on the lighting adjustment information.
• the lighting adjustment information includes a lighting adjustment value by which the current lighting intensity is increased or decreased, etc.
  • the computer device can also adjust the brightness of the lighting device based on the image brightness information of a specific area of the top image information.
• the specific area can be an interaction area determined based on the interactive object (for example, a boundary area, a circumscribed rectangular area, etc.), or an area determined from the top image information based on user operations of the target user/current user.
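The brightness estimate described in step S111 (averaging the brightness of a sample of pixels) can be sketched as below. The representation of the image as a nested list of 0-255 grayscale values and the threshold interval are illustrative assumptions.

```python
# Sketch of the sampled-average brightness check from step S111.
def image_brightness(pixels, sample_step=4):
    """Mean brightness over every `sample_step`-th pixel of a grayscale image."""
    flat = [v for row in pixels for v in row]   # flatten rows to one list
    sample = flat[::sample_step]                # sample some pixels, not all
    return sum(sample) / len(sample)

def needs_adjustment(brightness, low=80, high=180):
    """True if brightness falls outside the (assumed) preset threshold interval."""
    return not (low <= brightness <= high)
```

The same check can be restricted to a specific area (e.g. the circumscribed rectangle of the interactive object) by passing only those rows/columns of the image.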
  • the method further includes step S112 (not shown).
• in step S112, the interactive position information of the current user with respect to the top image information is obtained, and the corresponding virtual presentation information is determined based on the interactive position information.
  • the interactive position information about the interactive object in the top image information can be obtained based on the user operation of the current user.
• for example, the computer device presents the corresponding top image information on the display device and, based on the current user's frame selection, click or touch operations, determines one or more pixel positions or pixel areas from the top image information, and uses the one or more pixel positions or pixel areas as the corresponding interactive position information; the interactive position information includes coordinate position information in the image coordinate system of the top image information, etc. For another example, based on the current user's user operation on the operation object (for example, pointing a finger or pen tip to a certain position, etc.), the computer device determines the pointed position of the finger or pen through image recognition technology, and uses the pointed position as the corresponding interactive position information.
• the computer device can directly present the corresponding virtual presentation information based on the interactive position information. For example, the computer device matches the virtual information corresponding to the interactive position information from a database, or performs target recognition on the interactive object corresponding to the interactive position information and thereby matches the corresponding virtual information in the database; the corresponding virtual information is determined as virtual presentation information, and the projection position information of the virtual presentation information is determined through the interactive position information, so that the virtual presentation information can be projected through the projection device to the spatial location of the interactive object, etc.
• the computer device includes an infrared measurement device; wherein obtaining the interactive position information of the current user with respect to the top image information includes: determining the interactive position information of the top image information through the infrared measurement device.
  • the computer equipment also includes an infrared measurement device.
  • the infrared measurement device includes an infrared camera and an infrared emitter.
• the infrared camera is installed on the top extension pole together with the top camera, and the infrared emitter is installed on the base of the computer equipment; the infrared emitter forms an invisible light film slightly above the surface of the operating object (higher than the surface by a certain distance threshold). When a finger or any opaque object touches the operating object, the light is reflected to the infrared camera, and the position of the finger or object touching the operating object is then obtained through accurate photoelectric position calculation.
  • the infrared measurement device includes an infrared camera and an infrared pen.
• the infrared camera can determine the position where the infrared pen contacts the operating object; the corresponding interactive position information is determined based on the position of the finger, any opaque object or the infrared pen touching the operating object.
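One simple way to recover the touch position from the infrared camera, under the assumption that the reflected light appears as a bright blob in the IR frame, is to take the centroid of the pixels above an intensity threshold. This is a hypothetical sketch of one possible "photoelectric position calculation", not the method specified by the application.

```python
# Hypothetical sketch: estimate the touch position as the centroid of bright
# pixels in an infrared frame (nested list of 0-255 intensities).
def touch_centroid(ir_frame, threshold=200):
    """Return (x, y) centroid of bright pixels, or None if nothing touches."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(ir_frame):
        for x, v in enumerate(row):
            if v >= threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None                 # no reflection: no touch detected
    return (xs / n, ys / n)
```

The resulting (x, y) pair, expressed in the IR image coordinate system, would still need to be mapped into the top image coordinate system to serve as interactive position information.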
• the interaction location information is sent to the target user equipment, and the annotation information transmitted by the target user equipment and returned based on the interaction location information is received, thereby determining the image annotation information of the top image information, wherein the image annotation information includes the annotation information and the interactive position information, and the annotation information is determined by the first user operation of the target user.
• the interactive location information is used to indicate to the target user the area where the interactive object is currently located in the top image information.
• the target user device receives the top image information transmitted by the computer device and also receives the interactive location information, and collects the target user's first user operation on the interactive object corresponding to the interactive position information, so that the corresponding annotation information is determined based on the first user operation.
  • the target user device directly returns the annotation information to the computer device, and the computer device implements projection and presentation of the annotation information based on the received annotation information and combined with the previously determined interactive position information.
  • the computer device further includes a distance measurement device; wherein the method further includes step S113 (not shown).
• in step S113, distance information between the current user and the computer device is determined by the distance measurement device; if the distance information does not meet a preset distance threshold, the computer device sends a notification.
  • the distance measurement device is disposed on a side of the computer device parallel to the corresponding operating area and facing the current user, and is used to measure real-time distance information between the computer device and the current user, such as a laser rangefinder.
  • the computer device is provided with a corresponding distance threshold interval.
• when the distance information between the computer device and the current user is within the distance threshold interval, it is determined that the distance information satisfies the preset distance threshold and the current user's posture meets the requirements; if the distance information between the computer device and the current user is outside the distance threshold interval, it is determined that the distance information does not meet the preset distance threshold, and the computer device issues a corresponding prompt notification, for example reminding the current user through sound, image, vibration, text, etc. that the posture needs to be adjusted to ensure that the corresponding distance information meets the preset distance threshold.
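The posture check of step S113 reduces to an interval test on the measured distance. The function name and the `[low, high]` interval values below are assumptions for illustration; the application leaves the concrete threshold interval open.

```python
# Sketch of the distance-threshold check from step S113.
def check_posture(distance_cm, low=35.0, high=70.0):
    """Return (ok, message); low/high form an assumed distance threshold interval."""
    if low <= distance_cm <= high:
        return True, ""             # posture meets the requirements
    if distance_cm < low:
        return False, "Too close to the device - please move back."
    return False, "Too far from the device - please move closer."
```

When `ok` is false, the device would surface the message through sound, image, vibration or text, as described above.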
  • Figure 3 shows a projection interaction method according to one aspect of the present application, wherein the method is applicable to the system shown in Figure 1 and applied to the target user equipment 200, and mainly includes steps S201, S202 and S203.
• in step S201, the top image information transmitted by the corresponding computer device and collected by the corresponding top camera device is received and presented; in step S202, the first user operation of the target user corresponding to the target user equipment is obtained, and annotation information about the top image information is generated based on the first user operation; in step S203, the annotation information is returned to the computer device for the computer device to present the annotation information through a corresponding projection device.
• the current user (for example, user A) holds the computer device, through which the current user can communicate with the target user device held by the target user (for example, user B), for example by establishing a connection between the computer device and the target user device in a wired or wireless manner.
  • the computer equipment includes a top camera device, which is used to collect image information related to the operating object of the current user.
• the top camera device collects image information related to the operating object from above the operating object (for example, directly above or diagonally above, etc.); in some cases, the top camera device is disposed directly above the operation object, and the optical axis of the top camera device can pass through the centroid or center of the operation object, etc.
• the computer equipment is usually provided with a forward extension rod, and the top camera device is placed on the extension rod facing directly downward to collect the image information below it.
• there is an operation area corresponding to the operation object between the computer device and the user.
• the operation area can be an extension area below the computer device itself (for example, the top image information can be obtained more accurately through the extension area of the computer device itself), or a blank area set between the computer device and the user as the corresponding operation area (for example, a blank desktop set as the operation area, etc.).
  • the base of the computer equipment remains horizontal to maintain the stability of the computer equipment.
• the optical axis corresponding to the top camera device is vertically downward and the surface where the operating area is located remains horizontal, so that in the top image information captured by the computer device, the distances from each area of the operation object to the top camera device are similar.
  • the computer device collects top image information about the operating object through the top camera device, and sends the top image information directly or via a network device to the target user device.
• the computer device may first identify the top image information (for example, identify and track the operation object through preset template features), thereby ensuring that the corresponding operation object exists in the top image information. If there is no operation object in the current top image information, the computer device can adjust the camera angle of the top camera device and collect images from other areas to ensure that there is an operation object in the top image information, for example, by adjusting the extension angle and height of the extension rod or by directly adjusting the camera pose information of the top camera device, thereby achieving the effect of changing the camera angle of the top camera device.
  • the computer device displays corresponding prompt information to remind the current user that there is no operation object in the current operation area. If there is an operation object in the current top image information, the top image information is transmitted to the target user device. After receiving the top image information, the target user equipment presents the top image information, for example, displays the top image information through a display screen or displays the top image information through projection through a projector.
  • the target user device receives and presents the top image information for the target user to perform operations and interactions based on the top image information. While presenting the top image information, the target user device can also collect the target user's annotation information about the interactive object in the top image information through the input device.
• the interactive position of the interactive object may be predetermined, may be determined by the current user and transmitted to the target user device, or may be determined based on the target user's operation position (for example, touch position, cursor position, gesture recognition result or voice recognition result, etc.).
• in the former cases, the target user device directly transmits the annotation information to the computer device; if the interaction position in the top image information is determined by the target user's operation position, the target user device determines the interaction position as the annotation position information, combines it with the annotation information to generate corresponding image annotation information, and returns the image annotation information to the computer device, etc.
  • the method further includes step S204 (not shown).
• in step S204, corresponding annotation location information is determined based on the first user operation; wherein, in step S203, the annotation information and the annotation location information are returned to the computer device for the computer device to present the annotation information through a corresponding projection device based on the annotation location information.
  • the top image information includes an image coordinate system established based on the top image information.
  • the image coordinate system takes a certain pixel point (for example, the pixel in the upper left corner of the image) as the origin, the horizontal axis as the X axis and the vertical axis as the Y axis, so as to establish a corresponding image/pixel coordinate system.
  • the corresponding annotation position information includes the coordinate position information, in the image coordinate system, of the corresponding annotation information or of the interactive object of the annotation information.
  • the coordinate position information can be used to indicate the center position, or the coordinate set of the area range, of the annotation information or of the interactive object of the annotation information.
  • the annotation location information may be determined based on user operations of the current user and the target user, or may be preset.
  • the corresponding annotation information is determined by the target user device based on the first user operation of the target user on the top image; for example, the corresponding annotation information is determined based on mark information added by the target user through mouse input, keyboard input, touch-screen input, gesture input or voice input. In some cases, the annotation position information is determined directly from the corresponding operation position (the mouse click position, the position corresponding to keyboard input, the touch position on the touch screen, or the gesture recognition or voice recognition result) in the image coordinate system of the top image information; in other cases, the interactive object is first determined in the top image information based on that operation position, and the corresponding image annotation position is then determined based on the interactive object.
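As a hedged sketch of how an operation position on the target user device might be mapped back into the top image's pixel coordinate system, assuming the device shows the image uniformly scaled with no letterboxing (all names and values below are illustrative assumptions, not part of this application):

```python
def view_to_image_coords(view_point, view_size, image_size):
    """Map a touch/click position in the on-screen view to pixel
    coordinates in the transmitted top image, assuming a uniform
    scale between view and image (an assumption of this sketch)."""
    vx, vy = view_point
    vw, vh = view_size
    iw, ih = image_size
    return (vx * iw / vw, vy * ih / vh)

# A touch at (200, 150) in an 800x600 view maps into a 1600x1200 image:
print(view_to_image_coords((200, 150), (800, 600), (1600, 1200)))  # (400.0, 300.0)
```

In practice the device would attach the resulting coordinates to the annotation as the annotation position information before returning it.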
  • the annotation information includes but is not limited to tag information such as stickers, text, graphics, videos, graffiti, 2D marks or 3D marks about interactive objects in the top image information.
  • there are also differences in how the corresponding image location information is presented.
  • the projection device is placed near the top camera device, for example, the corresponding projector and the top camera are placed on the top extension rod.
  • the mapping relationship between the corresponding projection device and the top camera device can be obtained through calculation.
  • based on this mapping relationship, the annotation position information can be converted into projection position information, for example, the projection coordinate information of the annotation information in the coordinate system of the projection image; the projection position information here is only an example and is not limited.
  • the computer device projects the annotation information to the corresponding operation area according to the corresponding projection coordinate information, thereby presenting the annotation information in the corresponding area in the operation object.
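Since the projection device is placed near the top camera and the mapping relationship between them can be computed in advance (as described above), the conversion from annotation position to projection position can be sketched as a planar homography. The matrix values and function names below are illustrative assumptions; the application does not specify the calibration method:

```python
def apply_homography(H, point):
    """Map an (x, y) point through a 3x3 homography given as nested lists."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    px = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    py = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return px, py

# Purely illustrative camera-to-projector mapping (a real H would be
# estimated once from point correspondences during calibration):
H_cam_to_proj = [
    [1.0, 0.0, 40.0],   # x row: scale, shear, offset
    [0.0, 1.0, -25.0],  # y row: scale, shear, offset
    [0.0, 0.0, 1.0],
]

annotation_px = (320.0, 240.0)  # annotation position in the top image
print(apply_homography(H_cam_to_proj, annotation_px))  # (360.0, 215.0)
```

The computer device would then drive the projector with the resulting projection coordinates so the annotation lands on the corresponding area of the operation object.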
  • Figure 4 shows a projection interactive computer device 100 according to one aspect of the present application.
  • the device includes a first module 101, a second module 102 and a third module 103.
  • a first module 101, configured to collect the corresponding top image information through the top camera device, and transmit the top image information to the corresponding target user equipment, so that the target user equipment can present the top image information;
  • a second module 102, configured to obtain image annotation information about the top image information from the target user corresponding to the target user equipment, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by the first user operation of the target user;
  • a third module 103 is configured to determine corresponding projection position information based on the annotation position information, and project and present the annotation information based on the projection position information.
  • the computer device further includes a display device, and the device further includes a fourth module (not shown) for receiving the target image information transmitted by the target user device, and presenting the target image information through the display device.
  • the computer device further includes a front-facing camera device, wherein the front-facing camera device is used to collect front image information about the current user of the computer device; wherein the device further includes a fifth module (not shown), configured to transmit the front image information to the target user equipment so that the target user equipment can present the front image information.
  • the device further includes a module (not shown) for obtaining a camera switching request regarding the current video interaction between the computer device and the target user device, wherein the image information of the current video interaction includes the front image information; wherein the first module 101 is configured to, in response to the camera switching request, turn off the front camera device and enable the top camera device, collect the corresponding top image information through the top camera device, and transmit the top image information to the corresponding target user equipment, so that the target user equipment can present the top image information.
  • in some embodiments, this module is configured to receive a camera switching request transmitted by the target user equipment regarding the current video interaction between the computer device and the target user equipment, wherein the camera switching request is determined based on a second user operation of the target user.
  • in some embodiments, this module is configured to obtain a video establishment request regarding the current video interaction; in response to the video establishment request, a connection between the computer device and the target user equipment is established, the corresponding front image information is collected through the front camera device and transmitted to the target user equipment, so that the target user equipment can present the front image information.
  • the device further includes a module (not shown) for obtaining a camera restoration request regarding the current video interaction between the computer device and the target user device; in response to the camera restoration request, the top camera device is turned off and the front camera device is activated, corresponding front image information is collected through the front camera device, and the front image information is transmitted to the corresponding target user equipment.
  • the device further includes a module (not shown) for obtaining a camera start request regarding the video interaction between the computer device and the target user device; wherein the first module 101 is configured to, in response to the camera start request, collect the corresponding top image information through the top camera device and transmit the top image information to the corresponding target user equipment; and the fifth module is configured to, in response to the camera start request, collect corresponding front image information through the front camera device and transmit the front image information to the target user equipment.
  • the computer device includes a lighting device; wherein the device further includes a module (not shown) configured to turn on the lighting device if an enabling request for the lighting device is obtained.
  • the computer device further includes an ambient light detection device, and the device further includes a module (not shown) for obtaining the light intensity information of the current environment based on the ambient light detection device, and detecting whether the light intensity information satisfies a preset illumination threshold; if not, the lighting device is adjusted until the light intensity information satisfies the preset illumination threshold.
  • the device further includes an eleventh module (not shown) for determining image brightness information corresponding to the top image information based on the top image information, and detecting whether the image brightness information satisfies a preset brightness threshold; if not, the lighting device is adjusted until the image brightness information satisfies the preset brightness threshold.
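The brightness-feedback behavior of this module can be sketched as a simple closed loop over the lamp level; the helper names, thresholds, step size and the simulated camera below are assumptions made only for illustration:

```python
def mean_brightness(gray_image):
    """Average intensity of a grayscale image given as a 2D list (0-255)."""
    total = sum(sum(row) for row in gray_image)
    count = sum(len(row) for row in gray_image)
    return total / count

def adjust_lighting(capture_frame, set_lamp_level,
                    brightness_threshold=120, step=10, max_level=100):
    """Raise the lamp level until the captured top image meets the
    preset brightness threshold, or the lamp reaches full power."""
    level = 0
    while level <= max_level:
        frame = capture_frame()
        if mean_brightness(frame) >= brightness_threshold:
            return level
        level += step
        set_lamp_level(level)
    return max_level

# Simulated camera whose measured brightness tracks the lamp level:
lamp = {"level": 0}
fake_capture = lambda: [[60 + lamp["level"]] * 4] * 3
fake_set = lambda lvl: lamp.update(level=lvl)
print(adjust_lighting(fake_capture, fake_set))  # 60
```

A real implementation would compute brightness from the top camera frames and drive the actual lighting device instead of the simulated callbacks.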
  • the device further includes a module (not shown) for obtaining the interactive position information of the current user regarding the top image information, determining corresponding virtual presentation information based on the interactive position information, and projecting and presenting the virtual presentation information through the projection device.
  • the computer device includes an infrared measurement device; wherein obtaining the interactive position information of the current user regarding the top image information includes: determining the interactive position information in the top image information through the infrared measurement device.
  • the second module 102 is configured to send the interactive location information to the target user equipment, and to receive the annotation information transmitted by the target user equipment and returned based on the interactive location information, thereby determining the image annotation information of the top image information, wherein the image annotation information includes the annotation information and the interactive position information, and the annotation information is determined by the first user operation of the target user.
  • the computer device further includes a distance measurement device; wherein the device further includes a thirteenth module (not shown) for determining, through the distance measurement device, the distance information between the current user and the computer device; if the distance information does not meet a preset distance threshold, the computer device sends a notification.
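A minimal sketch of this distance check, assuming a hypothetical measurement callback and threshold (neither the units nor the notification text are specified by the application):

```python
def check_distance(measure_mm, threshold_mm=300):
    """Return a reminder string when the measured user distance falls
    below the preset threshold (e.g. to protect a child's eyesight),
    otherwise None. `measure_mm` is a hypothetical sensor callback."""
    distance = measure_mm()
    if distance < threshold_mm:
        return "Too close to the desk - please sit back."
    return None

print(check_distance(lambda: 250))  # prints the reminder string
print(check_distance(lambda: 450))  # None
```

In the device, the notification could equally be an on-screen prompt, a projected hint, or an audio cue.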
  • FIG. 5 shows a target user device 200 for projection interaction according to one aspect of the present application, which mainly includes a two-one module 201, a two-two module 202 and a two-three module 203.
  • the two-one module 201 is used to receive and present the top image information transmitted by the corresponding computer device and collected by the corresponding top camera device;
  • the two-two module 202 is used to obtain the first user operation of the target user corresponding to the target user equipment, and to generate annotation information about the top image information based on the first user operation;
  • the two-three module 203 is used to return the annotation information to the computer device, for the computer device to present the annotation information through the corresponding projection device.
  • the specific implementation manners of the two-one module 201, the two-two module 202 and the two-three module 203 are the same as or similar to the embodiments of step S201, step S202 and step S203 shown in FIG. 3, and therefore will not be described again, but are included here by reference.
  • the device further includes a two-four module (not shown), used to determine the corresponding annotation location information based on the first user operation; wherein the two-three module 203 is used to return the annotation information and the annotation location information to the computer device, for the computer device to present the annotation information through a corresponding projection device based on the annotation location information.
  • the specific implementation manner corresponding to the two-four module is the same as or similar to the foregoing embodiment of step S204, and therefore will not be described again, but is included here by reference.
  • the present application also provides a computer-readable storage medium that stores computer code.
  • when the computer code is executed, the method as described above is performed.
  • This application also provides a computer program product.
  • when the computer program product is executed by a computer device, the method described above is performed.
  • This application also provides a computer device, which includes:
  • one or more processors;
  • a memory for storing one or more computer programs;
  • when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any one of the preceding items.
  • FIG. 6 illustrates an exemplary system that may be used to implement various embodiments described in this application.
  • system 300 can serve as any of the above-mentioned devices in each of the described embodiments.
  • system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage device 320) having instructions, and one or more processors (e.g., processor(s) 305) coupled to the one or more computer-readable media and configured to execute the instructions to perform the actions described in this application.
  • system control module 310 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with system control module 310.
  • System control module 310 may include a memory controller module 330 to provide an interface to system memory 315 .
  • Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
  • System memory 315 may be used, for example, to load and store data and/or instructions for system 300 .
  • system memory 315 may include any suitable volatile memory, such as suitable DRAM.
  • system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
  • system control module 310 may include one or more input/output (I/O) controllers to provide interfaces to NVM/storage device 320 and communication interface(s) 325 .
  • NVM/storage device 320 may be used to store data and/or instructions.
  • NVM/storage device 320 may include any suitable non-volatile memory (e.g., flash memory) and/or any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
  • NVM/storage device 320 may include storage resources that are physically part of the device on which system 300 is installed, or that may be accessed by the device without necessarily being part of the device. For example, NVM/storage device 320 may be accessed over the network via communication interface(s) 325 .
  • Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device.
  • System 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 305 may be packaged with the logic of one or more controllers of the system control module 310 (eg, memory controller module 330). For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 to form a system-in-package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die as the logic of the one or more controllers of the system control module 310 . For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310 to form a system on a chip (SoC).
  • system 300 may be, but is not limited to, a server, a workstation, a desktop computing device, or a mobile computing device (e.g., laptop computing device, handheld computing device, tablet, netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or a different architecture. For example, in some embodiments, system 300 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application-specific integrated circuits (ASIC) and speakers.
  • the present application may be implemented in software and/or a combination of software and hardware, for example, using an application specific integrated circuit (ASIC), a general purpose computer or any other similar hardware device.
  • the software program of the present application can be executed by a processor to implement the steps or functions described above.
  • the software program of the present application (including related data structures) may be stored in a computer-readable recording medium, such as a RAM memory, a magnetic or optical drive or a floppy disk and similar devices.
  • some steps or functions of the present application may be implemented using hardware, for example, as a circuit that cooperates with a processor to perform each step or function.
  • part of the present application may be applied as a computer program product, such as computer program instructions.
  • when the computer program instructions are executed by a computer, methods and/or technical solutions according to the present application may be invoked or provided.
  • the form in which computer program instructions exist in a computer-readable medium includes but is not limited to source files, executable files, installation package files, etc.
  • the manner in which computer program instructions are executed by a computer includes but is not limited to: the computer directly executes the instructions; or the computer compiles the instructions and then executes the corresponding compiled program; or the computer reads and executes the instructions; or the computer reads and installs the instructions and then executes the corresponding installed program.
  • the computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by the computer.
  • Communication media includes the medium whereby communication signals containing, for example, computer readable instructions, data structures, program modules or other data are transmitted from one system to another system.
  • Communication media may include conducted transmission media, such as cables and wires (e.g., fiber optics, coaxial, etc.), and wireless (unguided transmission) media that can propagate energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared.
  • Computer readable instructions, data structures, program modules, or other data may be embodied, for example, as a modulated data signal in a wireless medium, such as a carrier wave or a similar mechanism such as that embodied as part of spread spectrum technology.
  • a modulated data signal refers to a signal in which one or more characteristics are altered or set in a manner that encodes information in the signal. The modulation may be an analog, digital or hybrid modulation technique.
  • computer-readable storage media may include volatile and nonvolatile, removable and non-removable storage media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Removable and non-removable media include, but are not limited to: volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memory (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); or other media now known or developed in the future that can store computer-readable information/data for use by computer systems.
  • one embodiment according to the present application includes a device, the device includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein when the computer program instructions are executed by the processor, the device is triggered to operate based on the aforementioned methods and/or technical solutions according to multiple embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application aims to provide a projection interaction method, and a device, a medium and a program product. The method specifically comprises: collecting corresponding top image information by means of a top camera apparatus, and transmitting the top image information to a corresponding target user equipment, so that the target user equipment presents the top image information; acquiring image annotation information, about the top image information, of a target user corresponding to the target user equipment, wherein the image annotation information comprises corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user; and determining corresponding projection position information on the basis of the annotation position information, and projecting and presenting the annotation information on the basis of the projection position information. By means of the present application, an interesting interaction can be performed for a current user, and a more real and natural augmented reality interaction can also be provided for the current user.

Description

A projection interaction method, device, medium and program product

This application is based on the application with CN application number 202210241557.9 and a filing date of 2022.03.11, and claims its priority. The disclosure of that CN application is hereby incorporated into this application in its entirety.
Technical Field

This application relates to the field of communications, and in particular to a technology for projection interaction.

Background

Augmented reality (AR) is a technology that computes the position and angle of camera images in real time and overlays corresponding digital information such as virtual three-dimensional model animations, videos, text and pictures; its goal is to superimpose the virtual world on the real world on a screen and enable interaction between them. Video call technology usually refers to a communication method, based on the Internet and the mobile Internet, that transmits people's voice and images between smart devices in real time. Apart from video communication, existing computer devices lack other interactive capabilities.
Summary

One purpose of this application is to provide a projection interaction method, device, medium and program product.

According to one aspect of the present application, a projection interaction method is provided, wherein the method is applied to a computer device, the computer device includes a top camera device and a projection device, and the method includes:

collecting the corresponding top image information through the top camera device, and transmitting the top image information to the corresponding target user equipment, so that the target user equipment presents the top image information;

obtaining image annotation information about the top image information from the target user corresponding to the target user equipment, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user;

determining corresponding projection position information based on the annotation position information, and projecting and presenting the annotation information based on the projection position information.
According to another aspect of the present application, a projection interaction method applied to a target user device is provided, the method including:

receiving and presenting the top image information transmitted by the corresponding computer device and collected by the corresponding top camera device;

obtaining a first user operation of the target user corresponding to the target user device, and generating annotation information about the top image information based on the first user operation;

returning the annotation information to the computer device, for the computer device to present the annotation information through a corresponding projection device.
According to one aspect of the present application, a computer device for projection interaction is provided. The computer device includes a top camera device and a projection device, and includes:

a first module, configured to collect the corresponding top image information through the top camera device and transmit the top image information to the corresponding target user equipment, so that the target user equipment presents the top image information;

a second module, configured to obtain image annotation information about the top image information from the target user corresponding to the target user equipment, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user;

a third module, configured to determine corresponding projection position information based on the annotation position information, and to project and present the annotation information based on the projection position information.

According to another aspect of the present application, a target user device for projection interaction is provided, wherein the device includes:

a two-one module, configured to receive and present the top image information transmitted by the corresponding computer device and collected by the corresponding top camera device;

a two-two module, configured to obtain a first user operation of the target user corresponding to the target user device, and to generate annotation information about the top image information based on the first user operation;

a two-three module, configured to return the annotation information to the computer device, for the computer device to present the annotation information through a corresponding projection device.
According to one aspect of the present application, a computer device is provided, wherein the device includes:

a processor; and

a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of any of the methods described above.

According to one aspect of the present application, a computer-readable storage medium is provided, on which computer programs/instructions are stored, wherein the computer programs/instructions, when executed, cause a system to perform the steps of any of the methods described above.

According to one aspect of the present application, a computer program product is provided, including computer programs/instructions, wherein the computer programs/instructions, when executed by a processor, implement the steps of any of the methods described above.
Compared with the prior art, the present application projects and presents the target user's image annotation information on the computer device side through interaction between the two parties, which can provide the current user with an entertaining interaction while offering the current user a more realistic and natural augmented-reality experience. In particular, it can enhance the interaction and sense of participation in parent-child companionship for parents who are not at their children's side.
Description of the Drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Figure 1 shows a system topology diagram of projection interaction according to an embodiment of the present application;
Figure 2 shows a flow chart of a projection interaction method according to an embodiment of the present application;
Figure 3 shows a flow chart of a projection interaction method according to an embodiment of the present application;
Figure 4 shows functional modules of a computer device according to an embodiment of the present application;
Figure 5 shows functional modules of a target user device according to an embodiment of the present application;
Figure 6 shows an exemplary system that may be used to implement the various embodiments described in this application.
The same or similar reference numerals in the drawings denote the same or similar components.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
In a typical configuration of the present application, the terminal, the devices of the service network, and the trusted party each include one or more processors (for example, a central processing unit (CPU)), an input/output interface, a network interface, and memory.
The memory may take the form of non-persistent storage in computer-readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The devices referred to in this application include, but are not limited to, user equipment, network devices, or devices formed by integrating user equipment and network devices through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user, such as a smartphone, tablet computer, or smart desk lamp; the mobile electronic product may run any operating system, such as the Android or iOS operating system. The network device includes an electronic device capable of automatically performing numerical computation and information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like.
The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a form of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers. The network includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, wireless ad hoc networks, and the like. Preferably, the device may also be a program running on the user equipment, the network device, or a device formed by integrating user equipment with a network device, a touch terminal, or a network device with a touch terminal through a network.
Of course, those skilled in the art should understand that the above devices are only examples; other existing or future devices, if applicable to this application, should also be included within the scope of protection of this application and are hereby incorporated by reference.
In the description of this application, "a plurality of" means two or more, unless otherwise expressly and specifically defined.
Figure 1 shows a typical scenario of the present application. A computer device 100 establishes a communication connection with a target user device 200; the computer device 100 transmits corresponding top image information to the target user device 200; the target user device 200 receives the top image information, determines corresponding annotation information or image annotation information based on the top image information, and then returns the annotation information or image annotation information to the computer device 100. The computer device 100 includes, but is not limited to, any electronic product capable of human-computer interaction with a user, such as a smartphone, tablet computer, smart desk lamp, or smart projection apparatus. The computer device includes a top camera, such as an ordinary camera or a depth camera, for capturing image information about an operation object of the current user of the computer device (for example, a book the user is currently reading, or a workpiece the user is operating on) from above the object (for example, directly above or obliquely above).
The target user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user, such as a smartphone, tablet computer, or personal computer. The target user device includes a display apparatus, such as a liquid crystal display or a projector, for presenting the top image information; the target user device also includes an input apparatus for collecting the user's annotation information or image annotation information about the top image information. The annotation information includes, but is not limited to, marks such as stickers, text, graphics, videos, doodles, 2D marks, or 3D marks relating to an interaction object in the top image information; the corresponding image annotation information includes the above annotation information together with corresponding annotation position information, such as the image coordinate information, in the image coordinate system, of the annotation information or of its interaction object, where the annotation position information is only an example and is not limited thereto. In this application, data transmission between the target user device and the computer device may take place over a direct communication connection between the two devices, or may be forwarded via a corresponding server, etc.
Figure 2 shows a projection interaction method according to one aspect of the present application. The method is applied to the computer device 100 and can be used with the system topology shown in Figure 1. The method includes step S101, step S102, and step S103. In step S101, corresponding top image information is captured by the top camera, and the top image information is transmitted to the corresponding target user device for the target user device to present; in step S102, image annotation information about the top image information is obtained from the target user corresponding to the target user device, where the image annotation information includes corresponding annotation information and annotation position information of the annotation information, the annotation information being determined by a first user operation of the target user; in step S103, corresponding projection position information is determined based on the annotation position information, and the annotation information is projected and presented based on the projection position information.
Specifically, in step S101, corresponding top image information is captured by the top camera, and the top image information is transmitted to the corresponding target user device for the target user device to present. For example, the current user (for example, user A) holds a computer device through which he or she can communicate with a target user device held by a target user (for example, user B), such as by establishing a wired or wireless communication connection between the computer device and the target user device, or by transmitting data between the two via a network device. The computer device includes a top camera used to capture image information about the current user's operation object; for example, the top camera captures image information about the operation object from above it (for example, directly above or obliquely above). In some cases, the top camera is arranged directly above the operation object, so that its optical axis passes through the body or center of the operation object.
Of course, for the top camera to meet these requirements, the computer device is usually provided with a forward extension arm on which the top camera is mounted facing straight down to capture the image information below. Beneath the extension arm, between the computer device and its user, an operation area corresponding to the operation object is provided. The operation area may be an extension surface of the computer device itself (for example, a built-in extension surface allows the top image information to be captured more accurately), or a blank area between the computer device and the user may be designated as the operation area (for example, a blank desktop). Of course, in some cases, the base of the computer device is kept level so as to keep the device stable; correspondingly, the optical axis of the top camera points vertically downward and the surface of the operation area is kept horizontal, so that in the top image information captured by the computer device the various regions of the operation object are at similar distances from the top camera.
The computer device captures top image information about the operation object through the top camera, and sends the top image information to the target user device directly or via a network device. In some embodiments, the computer device may first run recognition on the top image information (for example, recognizing and tracking preset template features) to ensure that the corresponding operation object is present in the top image information. If no operation object is present in the current top image information, the computer device may adjust the shooting angle of the top camera and capture images of other areas to ensure that an operation object appears, for example by adjusting the extension angle or height of the extension arm, or by directly adjusting the pose of the top camera, thereby changing its shooting angle. If no operation object is found after all angles have been tried, or no operation object appears in a certain number of consecutively captured top images, the computer device presents a corresponding prompt to inform the current user that no operation object exists in the current operation area. If an operation object is present in the current top image information, the top image information is transmitted to the target user device.
After receiving the top image information, the target user device presents it, for example by displaying it on a display screen or projecting it with a projector.
The target user device receives and presents the top image information so that the target user can interact with it. While presenting the top image information, the target user device can also collect, through its input apparatus, the target user's annotation information about an interaction object in the top image information. The interaction position of the interaction object may be predetermined, may be determined by the current user and transmitted to the target user device, or may be determined from the target user's operation position (for example, a touch position, a cursor position, a gesture recognition result, or a speech recognition result). If the interaction position in the top image information is predetermined or determined by the current user, the target user device transmits the annotation information directly to the computer device; if the interaction position is determined from the target user's operation position, the target user device takes that interaction position as the annotation position information, combines it with the annotation information to generate corresponding image annotation information, and returns the image annotation information to the computer device.
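The branch just described — returning bare annotation information when the interaction position is already known on the computer-device side, versus packaging annotation and position together into image annotation information when the position comes from the target user's own operation — could be sketched as follows. This is only an illustration; the message fields and function name are assumptions, not part of the application.

```python
def build_reply(annotation, operation_position=None):
    """Package the target user's annotation for return to the computer device.

    If the interaction position was predetermined (or set by the current user),
    only the annotation itself is returned; if it was derived from the target
    user's operation (touch, cursor, gesture, speech), the operation position
    is attached as annotation position information, forming image annotation
    information.
    """
    if operation_position is None:
        # Interaction position already known on the computer-device side.
        return {"type": "annotation", "annotation": annotation}
    # Position determined from the target user's operation: send both together.
    return {
        "type": "image_annotation",
        "annotation": annotation,
        "annotation_position": operation_position,  # pixel coords in the top image
    }
```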
In step S102, image annotation information about the top image information is obtained from the target user corresponding to the target user device, where the image annotation information includes corresponding annotation information and annotation position information of the annotation information, the annotation information being determined by a first user operation of the target user. For example, the top image information includes an image coordinate system established on the top image: the image coordinate system takes a certain pixel (for example, the pixel at the top-left corner of the image) as the origin, the horizontal axis as the X axis, and the vertical axis as the Y axis, thereby establishing a corresponding image/pixel coordinate system. The corresponding annotation position information includes the coordinate position information, in the image coordinate system, of the annotation information or of its interaction object; the coordinate position information may indicate the center position of the annotation information or of its interaction object, or may be a coordinate set describing its region extent. The annotation position information may be determined from a user operation of the current user or of the target user, or may be preset.
The corresponding annotation information is determined by the target user device based on the target user's first user operation on the top image, for example based on marks added by the target user via mouse input, keyboard input, touch-screen input, gesture input, or voice input. In some cases, the annotation position information is determined from the image coordinates, in the top image information, of the mouse click position, the position corresponding to keyboard input, the touch position on the touch screen, or the gesture or speech recognition result; alternatively, the corresponding interaction object is first identified in the top image information from the mouse click position, keyboard input position, touch position, gesture recognition result, or speech recognition result, and the corresponding image annotation position is then determined from the interaction object.
The annotation information includes, but is not limited to, marks such as stickers, text, graphics, videos, doodles, 2D marks, or 3D marks relating to an interaction object in the top image information. In some embodiments, the representation of the corresponding image position information also differs depending on the type of annotation information.
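As an illustration of how the annotation position representation could differ by annotation type — a sticker anchored at a single center point versus a doodle described by a set of pixel coordinates — one hypothetical encoding is sketched below. The class and field names are assumptions chosen for the example, not terms defined by the application.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# (x, y) in the top image's pixel coordinate system: origin at the top-left
# pixel, X along the horizontal axis, Y along the vertical axis.
Point = Tuple[int, int]

@dataclass
class ImageAnnotation:
    kind: str                             # e.g. "sticker", "text", "doodle", "2d_mark"
    payload: str                          # mark content or a resource reference
    center: Optional[Point] = None        # point-style position (e.g. sticker anchor)
    region: Optional[List[Point]] = None  # region-style position (e.g. doodle stroke)

    def anchor(self) -> Point:
        """Single point usable for projection: the center, or the region centroid."""
        if self.center is not None:
            return self.center
        xs = [p[0] for p in self.region]
        ys = [p[1] for p in self.region]
        return (sum(xs) // len(xs), sum(ys) // len(ys))
```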
In step S103, corresponding projection position information is determined based on the annotation position information, and the annotation information is projected and presented based on the projection position information. For example, the projection apparatus is mounted near the top camera; for instance, the projector and the top camera are mounted together on the top extension arm. The mapping between the projection apparatus and the top camera can be obtained by calculation, for example: s1: project a calibration image containing a specific pattern (such as a checkerboard) onto the operation area using the projection apparatus; s2: capture a video image containing the operation area with the top camera; s3: using the image captured in s2, identify the coordinate information of each pattern element in the displayed picture; s4: establish the correspondence between the pattern coordinates in the original calibration image projected in s1 and the pattern coordinates in the video image of the operation area captured in s2; s5: estimate the camera's intrinsic and extrinsic parameters, as well as its distortion parameters, from the two sets of coordinates in s4; s6: use the parameters obtained in s5 to realize the mapping between the two kinds of images.
Those skilled in the art should understand that the above method of calculating the mapping between the projection apparatus and the top camera is only an example; other existing or future methods of calculating such a mapping, if applicable to this application, should also be included within the scope of protection of this application and are hereby incorporated by reference. Based on the mapping between the projection apparatus and the top camera, the annotation position information can be converted into projection position information, such as the projection coordinate information of the annotation information in the projection image coordinate system; the projection position information here is only an example and is not limited thereto. The computer device projects the annotation information onto the corresponding operation area according to the projection coordinate information, thereby presenting the annotation information on the corresponding region of the operation object.
In some embodiments, the computer device further includes a display apparatus, and the method further includes step S104 (not shown): in step S104, target image information transmitted by the target user device is received and presented through the display apparatus. For example, the computer device further includes a display apparatus, such as a liquid crystal display, used to present image information stored or received by the computer device. In some cases, to make it easy for the current user to view the image information, the display apparatus is placed directly in front of, or near, the side of the computer device facing the current user. During communication between the computer device and the target user device, in order to facilitate communication between the two users and improve the immediacy and efficiency of the interaction, the target user device is provided with a corresponding camera for capturing target image information on its side; the target image information includes image information about the target user. The target user device may transmit the target image information to the computer device, and the computer device receives it and presents it through the display apparatus.
In some cases, the target image information and the corresponding top image information are contained in video streams captured in real time, in which case both the computer device and the target user device can present, through their respective display apparatuses, the real-time video streams corresponding to the top image information and the target image information. The corresponding video stream contains not only the images captured by the camera but also voice information collected by a voice input apparatus; while displaying the real-time video streams corresponding to the target image information and the top image information, the computer device and the target user device also play the corresponding voice information through voice output apparatuses. In other words, the target user device and the computer device conduct audio-video communication.
In some embodiments, the computer device further includes a front camera, where the front camera is used to capture front image information about the current user of the computer device; the method further includes step S105 (not shown): in step S105, the front image information is transmitted to the target user device for the target user device to present. For example, the computer device further includes a front camera for capturing image information about the current user holding the computer device; the front camera is placed on the side of the computer device facing the current user, for example above the display apparatus. In some cases, the front camera is mainly used to capture image information about the current user's head, in order to enable video interaction between the current user and the target user. When enabled, the front camera captures front image information about the current user and transmits it to the target user device for display on the target user device's display apparatus.
In some cases, the front image information is contained in a video stream captured in real time, in which case the target user device can present the real-time video stream corresponding to the front image information through its display apparatus. Here, the activation of the front camera may be linked to the activation of the top camera, or the two may be mutually independent cameras, in which case the target user device may present the video streams of both cameras simultaneously, or present only one of them. The front camera may be enabled in response to a video-establishment request from the target user device or the computer device, by switching over from the enabled state of the top camera, or by triggering an independent enable control of the front camera.
In some embodiments, the method further includes step S106 (not shown). In step S106, a camera switching request regarding the current video interaction between the computer device and the target user equipment is obtained, wherein the image information of the current video interaction includes the front image information. In step S101, in response to the camera switching request, the front camera device is turned off and the top camera device is enabled; corresponding top image information is collected through the top camera device and transmitted to the corresponding target user equipment, so that the target user equipment can present the top image information. For example, only one of the front camera device and the top camera device of the computer device is enabled at any given moment, thereby reducing the bandwidth pressure of the video interaction and ensuring the efficiency and orderliness of the video interaction process.

In some embodiments, the computer device is provided with a corresponding camera switching control. The camera switching control may be a physical button on the computer device or a virtual control presented on the current screen, and is used to switch from the enabled state of the front camera device to the enabled state of the top camera device. In some cases, the camera switching control is also used to switch the computer device from the enabled state of the top camera device back to the enabled state of the front camera device; in other words, the camera switching control toggles between the enabled states of the top camera device and the front camera device. In other cases, the camera switching control is used only to switch from the enabled state of the front camera device to the enabled state of the top camera device, and the computer device is additionally provided with a corresponding camera restore control, which is used to switch the computer device from the enabled state of the top camera device back to the enabled state of the front camera device. In other embodiments, the computer device determines the camera switching request by recognizing interactive input operations of the current user, such as gestures, voice, or head movements. In still other embodiments, the target user equipment may similarly be provided with a corresponding camera switching control, or may determine the camera switching request by recognizing interactive input operations of the target user, such as gestures, voice, or head movements; details are not repeated here.

The computer device collects front image information during the video interaction. When a touch operation on the camera switching control by a user (for example, either the current user or the target user) is obtained, the computer device turns off the front camera device and enables the top camera device, collects corresponding top image information through the top camera device, and transmits the top image information to the corresponding target user equipment, so that the target user equipment can present the top image information. Further, the target user may perform a first user operation on the top image information to determine annotation information of the top image information; the computer device determines projection position information based on the annotation position information and projects the annotation information based on the projection position information, so that the target user can intuitively guide the current user with respect to the operation object.
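The exclusive switching behavior described above (only one camera device enabled at a time, with a switch request toggling between the two and a restore request returning to the front camera) can be sketched as a small state machine. This is an illustrative reconstruction, not the patent's implementation; all class and method names are assumptions.

```python
from enum import Enum

class CameraMode(Enum):
    FRONT = "front"  # video communication with the target user
    TOP = "top"      # top-down view of the operation object

class CameraController:
    def __init__(self):
        # The front camera is typically enabled when the video interaction
        # is established.
        self.mode = CameraMode.FRONT

    def switch(self):
        # Camera switching request: disable the active camera and enable
        # the other, so only one stream is transmitted at a time.
        self.mode = (CameraMode.TOP if self.mode is CameraMode.FRONT
                     else CameraMode.FRONT)
        return self.mode

    def restore(self):
        # Camera restore request: return to the front camera unconditionally.
        self.mode = CameraMode.FRONT
        return self.mode
```

A switch request from either party's control would call `switch()`, and the restore control would call `restore()`; keeping the two cameras mutually exclusive is what bounds the bandwidth of the interaction.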
In some cases, the enabled states of the different camera devices correspond to different interaction modes. For example, when only the front camera device is currently enabled, the computer device is in a video communication mode, used for video communication between the current user and the target user; when only the top camera device is currently enabled, the computer device is in a video guidance mode, used for the target user's guidance regarding the current user's operation object; and when both the front camera device and the top camera device are currently enabled, the computer device supports the target user's guidance regarding the current user's operation object while also facilitating video communication between the target user and the current user.
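The mapping from camera states to interaction modes described above is a simple decision over two booleans; a minimal sketch follows, where the mode labels are illustrative rather than terms from the source.

```python
def interaction_mode(front_enabled: bool, top_enabled: bool) -> str:
    """Map the enabled states of the two camera devices to the
    interaction mode of the computer device (labels are assumptions)."""
    if front_enabled and top_enabled:
        return "guidance+communication"
    if top_enabled:
        return "video guidance"
    if front_enabled:
        return "video communication"
    return "inactive"
```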
In some embodiments, in step S105, a camera switching request transmitted by the target user equipment regarding the current video interaction between the computer device and the target user equipment is received, wherein the camera switching request is determined based on a second user operation of the target user. For example, based on a second user operation of the target user (for example, a trigger operation on a camera switching control, or an interactive input operation instruction regarding camera switching), the target user equipment may generate a camera switching request regarding the current video interaction and transmit the camera switching request to the corresponding computer device; alternatively, the target user equipment sends the second user operation to a corresponding server, and the server generates the corresponding camera switching request according to the second user operation and sends the camera switching request to the computer device. Here, the terms "first user operation" and "second user operation" are used only to distinguish the respective roles of the user operations and do not imply any order, magnitude, or other relationship between them.

After the video interaction is established, while the target user equipment obtains and presents the front image information of the front camera device, in some embodiments a corresponding camera switching control is presented on the current screen, and the target user equipment may determine the camera switching request by collecting a second user operation such as a touch operation of the target user on the camera switching control. In other embodiments, the target user equipment may determine the camera switching request by collecting a second user operation in the form of an interactive input operation instruction, such as the target user's gestures, voice, or head movements. The camera switching request usually occurs after the video interaction is established. When the video interaction is established, the corresponding front camera device may be enabled at the same time based on the video establishment request; it may be enabled based on interactive operations of both parties after the target user equipment and the computer device establish communication; or it may be entered by switching from the enabled state of the top camera device to the enabled state of the front camera device after the target user equipment and the computer device establish communication and enable the top camera device.
In some embodiments, in step S105, a video establishment request regarding the current video interaction is obtained; in response to the video establishment request, the current video interaction between the computer device and the target user equipment is established based on the video establishment request, and corresponding front image information is collected through the front camera device and transmitted to the target user equipment, so that the target user equipment can present the front image information. For example, the current video interaction may be initiated based on a video establishment request determined by a user operation of the current user or the target user, and a video stream is transmitted between the target user equipment and the computer device from the start of the current video interaction; for example, the computer device transmits the corresponding front image information to the target user equipment and the target user equipment transmits corresponding target image information to the computer device, or only the computer device transmits the front image information to the target user equipment, and so on.

The video establishment request may be determined based on an initiating operation of the current user on the computer device side (for example, a trigger operation on a video establishment control, or an interactive input operation instruction regarding video establishment), and the video establishment request includes user identification information of the corresponding target user. User identification information includes, but is not limited to, unique identifying information used to identify the target user, for example, a name, image, ID number, mobile phone number, application serial number, or device access control address information. The computer device may send the video establishment request to a network device, which forwards it to the target user equipment and establishes the video interaction between the two; or the computer device may send the video establishment request directly to the target user equipment and establish the video interaction between the two. As another example, the video establishment request may be determined based on an initiating operation of the target user on the target user equipment side (for example, a trigger operation on a video establishment control, or an interactive input operation instruction regarding video establishment), and the video establishment request includes user identification information of the corresponding current user; the target user equipment may send the video establishment request to a network device, which forwards it to the computer device and establishes the video interaction between the two, or the target user equipment may send the video establishment request directly to the computer device and establish the video interaction between the two.
In some embodiments, the method further includes step S107 (not shown). In step S107, a camera restore request regarding the current video interaction between the computer device and the target user equipment is obtained; in response to the camera restore request, the top camera device is turned off and the front camera device is enabled, corresponding front image information is collected through the front camera device, and the front image information is transmitted to the corresponding target user equipment. For example, the camera restore request is used to switch the computer device from the enabled state of the top camera device to the enabled state of the front camera device. After this restoration, the top camera device is off and only front image information is collected and transmitted; for example, the computer device transmits the corresponding front image information to the target user equipment and the target user equipment transmits corresponding target image information to the computer device, or only the computer device transmits the front image information to the target user equipment, and so on. In some embodiments, the camera restore request is initiated based on a touch operation of the current user or the target user on the camera restore control; in other embodiments, the camera restore request is initiated based on an interactive input operation instruction of the current user or the target user regarding camera restoration (such as gestures, voice, or head movements). Based on the camera restore request, the computer device turns off the top camera device currently enabled for the video interaction and enables the corresponding front camera device, thereby restoring the interaction from video guidance regarding the operation object to video communication between the two parties.
In some embodiments, the method further includes step S108 (not shown). In step S108, a camera enable request regarding the video interaction between the computer device and the target user equipment is obtained. In step S101, in response to the camera enable request, corresponding top image information is collected through the top camera device and transmitted to the corresponding target user equipment; in step S105, in response to the camera enable request, corresponding front image information is collected through the front camera device and transmitted to the target user equipment. For example, the camera enable request is used to enable the front camera device and the top camera device simultaneously. The camera enable request may be contained in the video establishment request and used to enable both camera devices during video establishment, with corresponding close controls presented for the front camera device and the top camera device so that the current user and/or the target user can turn off either device; of course, if the front camera device and the top camera device are turned off at the same time, the video interaction between the computer device and the target user equipment is closed. In some cases, the camera enable request may also invoke the front camera device or the top camera device after the video interaction has been established, that is, during the video interaction (in which one of the front camera device and the top camera device is enabled), so that both camera devices are enabled at the same time. In some embodiments, the camera enable request may be generated based on a touch operation of the current user or the target user on an enable control; in other embodiments, the camera enable request may be generated based on an interactive input operation instruction of the current user or the target user regarding camera enabling (such as gestures, voice, or head movements), and both camera devices are enabled simultaneously based on the response of the computer device or the target user equipment to the camera enable request.
In some embodiments, the computer device includes a lighting device, and the method further includes step S109 (not shown). In step S109, if an enable request regarding the lighting device is obtained, the lighting device is turned on. For example, the computer device may include a lighting device for adjusting the brightness of the operating area. The enable request is used to turn on the corresponding lighting device, projecting light of a certain intensity onto the operating area and thereby changing the ambient brightness. The enable request may be generated by the computer device based on an operation of the current user, or generated based on an operation of the target user at the target user equipment (such as a touch on a lighting control) and transmitted to the computer device. In some cases, the enable request is contained in the corresponding video establishment request and is used to turn on the lighting device while the video interaction is being established; in other cases, the enable request is determined based on a user operation of the target user or the current user during the video interaction, or when no video interaction is in progress, thereby turning on the lighting device to adjust the ambient brightness.
In some embodiments, the computer device further includes an ambient light detection device, and the method further includes step S110 (not shown). In step S110, light intensity information of the current environment is obtained based on the ambient light detection device, and whether the light intensity information satisfies a preset lighting threshold is detected; if not, the lighting device is adjusted until the light intensity information satisfies the preset lighting threshold. For example, the lighting device of the computer device includes a lighting device with adjustable brightness: the lighting brightness may be adjusted based on a touch selection operation of the current user or the target user on a brightness adjustment control, or based on an interactive input operation instruction of the current user or the target user regarding brightness adjustment (such as gestures, voice, or head movements). In some cases, the computer device includes an ambient light detection device, which cooperates with the lighting device to realize automatic adjustment of the lighting and ensure the controllability and suitability of the ambient brightness. The computer device measures the light intensity information of the current environment based on the ambient light detection device and compares the light intensity information with a preset lighting threshold; the preset lighting threshold may be a specific light intensity value or an interval composed of multiple light intensity values, which is not limited here. If the light intensity information is the same as the lighting threshold information, or the intensity difference is less than a preset difference threshold, it is determined that the light intensity information satisfies the preset lighting threshold; alternatively, if the light intensity information falls within the lighting threshold interval, it is determined that the light intensity information satisfies the preset lighting threshold. If it does not, corresponding lighting adjustment information is calculated based on the light intensity information and the lighting threshold information, and the corresponding lighting device is adjusted based on the lighting adjustment information, where the lighting adjustment information includes a lighting adjustment value by which the current light intensity information is to be increased or decreased.
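The adjust-until-satisfied loop described above can be sketched as a simple closed-loop controller. This is an illustrative reconstruction under stated assumptions: `read_lux` stands in for the ambient light detection device, and the 300-500 lux target interval and 10-unit step are invented values, not figures from the source.

```python
def regulate_lighting(read_lux, level=50.0, target=(300.0, 500.0),
                      step=10.0, max_iters=100):
    """Raise or lower the lamp level until the measured ambient light
    intensity falls within the preset lighting threshold interval."""
    low, high = target
    for _ in range(max_iters):
        lux = read_lux(level)
        if low <= lux <= high:
            break  # light intensity satisfies the preset threshold
        # Increase the level when too dark, decrease when too bright.
        level += step if lux < low else -step
    return level
```

With a sensor that reports four lux per level unit, a start level of 50 (200 lux) is stepped up until the reading enters the target interval.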
In some embodiments, the method further includes step S111 (not shown). In step S111, image brightness information corresponding to the top image information is determined based on the top image information, and whether the image brightness information satisfies a preset brightness threshold is detected; if not, the lighting device is adjusted until the image brightness information satisfies the preset brightness threshold. For example, the computer device may determine corresponding image brightness information based on the top image information, for instance by calculating average brightness information of the current image information based on the pixel brightness information of some pixels (for example, a sampled subset of pixels) or all pixels in the top image information. The image brightness information is then compared with a preset brightness threshold, which may be a specific image brightness value or an interval composed of multiple image brightness values, which is not limited here. If the image brightness information is the same as the brightness threshold information, or the brightness difference is less than a preset difference threshold, it is determined that the image brightness information satisfies the preset brightness threshold; alternatively, if the image brightness information falls within the brightness threshold interval, it is determined that the image brightness information satisfies the preset brightness threshold. If it does not, corresponding lighting adjustment information is calculated based on the image brightness information and the brightness threshold information, and the corresponding lighting device is adjusted based on the lighting adjustment information, where the lighting adjustment information includes a lighting adjustment value by which the current light intensity information is to be increased or decreased. In some cases, the computer device may also adjust the brightness of the lighting device based on the image brightness information of a specific region of the top image information, where the specific region may be an interaction region determined based on the interactive object (for example, a boundary region or a circumscribed rectangular region) or determined from the top image information based on a user operation of the target user or the current user.
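The brightness estimation described above (average over all pixels or a sampled subset, then an interval check) can be sketched as follows. The helper names and the 80-180 threshold interval are illustrative assumptions, not values from the source.

```python
def average_brightness(luma_values, sample_step=1):
    """Estimate image brightness from all pixels or a sampled subset.
    `luma_values` is a flat sequence of 0-255 brightness values."""
    sampled = luma_values[::sample_step]
    return sum(sampled) / len(sampled)

def within_threshold(brightness, threshold=(80.0, 180.0)):
    """Check the brightness against a threshold interval; the source
    also allows a single target value with a difference tolerance."""
    low, high = threshold
    return low <= brightness <= high
```

Restricting `luma_values` to pixels inside the interaction region (for example, the circumscribed rectangle of the interactive object) yields the region-specific variant mentioned above.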
In some embodiments, the method further includes step S112 (not shown). In step S112, interaction position information of the current user regarding the top image information is obtained; corresponding virtual presentation information is determined based on the interaction position information, and the virtual presentation information is projected and presented through the projection device. For example, during the video interaction, the interaction position information regarding the interactive object in the top image information may be obtained based on a user operation of the current user: the corresponding top image information is presented to the current user on the display device, and one or more pixel positions or pixel regions are determined from the top image information based on the user's frame-selection, click, or touch operations; the one or more pixel positions or pixel regions are taken as the corresponding interaction position information, which includes coordinate position information in the image coordinate system of the top image information. As another example, based on a user operation of the current user on the operation object (for example, pointing to a position with a finger or pen tip), the computer device determines the position pointed to by the finger or pen through image recognition technology and takes the pointed position as the corresponding interaction position information.
The computer device may present corresponding virtual presentation information directly based on the interaction position information. For example, the computer device matches virtual information corresponding to the interaction position information from a database, or performs target recognition on the interactive object corresponding to the interaction position information and matches the corresponding virtual information in the database; the matched virtual information is determined as the virtual presentation information, and projection position information of the virtual presentation information is determined from the interaction position information, so that the virtual presentation information is projected by the projection device to the spatial position of the interactive object. Further, in some embodiments, the computer device includes an infrared measurement device, and obtaining the interaction position information of the current user regarding the top image information includes determining the interaction position information of the top image information through the infrared measurement device. In some embodiments, the infrared measurement device includes an infrared camera and an infrared emitter: for example, the infrared camera is mounted together with the top camera on the top extension arm, and the infrared emitter is mounted on the base of the computer device. The infrared emitter forms an invisible light film over the surface of the operation object at a certain distance threshold above that surface; when a finger or any opaque object touches the surface, the light is reflected to the infrared camera, and the position at which the finger or opaque object touches the operation object is then obtained through precise calculation of the photoelectric position. In other embodiments, the infrared measurement device includes an infrared camera and an infrared pen: when the current user touches the surface of the operation object with the infrared pen, the infrared camera can determine the position at which the infrared pen touches the operation object. The corresponding interaction position information is determined based on the position at which the finger, opaque object, or infrared pen touches the operation object.
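One way to picture the "precise calculation of the photoelectric position" is a calibrated mapping from the infrared-camera pixel where the reflection is detected to a point on the operating surface. The sketch below assumes an idealized top-down view with negligible lens distortion; the camera resolution and surface dimensions are invented illustrative values, and a real device would use a full calibration (e.g. a homography) instead of a linear scale.

```python
def touch_position_mm(px, py, cam_res=(1280, 720), surface_mm=(400.0, 300.0)):
    """Map an infrared-camera pixel (px, py) where a reflection was
    detected to a position on the operating surface, in millimetres."""
    w_px, h_px = cam_res
    w_mm, h_mm = surface_mm
    # Linear rescale from pixel coordinates to surface coordinates.
    return (px / w_px * w_mm, py / h_px * h_mm)
```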
In some embodiments, in step S102, the interaction position information is sent to the target user equipment, and annotation information transmitted by the target user equipment and returned based on the interaction position information is received, thereby determining image annotation information of the top image information, wherein the image annotation information includes the annotation information and the interaction position information, and the annotation information is determined by a first user operation of the target user. For example, the interaction position information is used to indicate to the target user the region in the top image information where the interactive object is currently located. The target user equipment receives the interaction position information together with the top image information transmitted by the computer device, and collects the target user's user operation on the interactive object corresponding to the interaction position information, thereby determining the corresponding annotation information based on the first user operation. The target user equipment returns the annotation information directly to the computer device, and the computer device, based on the received annotation information combined with the previously determined interaction position information, realizes projection and presentation of the annotation information.
In some embodiments, the computer device further includes a distance measurement device, and the method further includes step S113 (not shown). In step S113, distance information between the current user and the computer device is determined through the distance measurement device, and if the distance information does not satisfy a preset distance threshold, the computer device issues a notification. For example, the distance measurement device, such as a laser rangefinder, is disposed on the side of the computer device that is parallel to the corresponding operating area and faces the current user, and is used to measure real-time distance information between the computer device and the current user. The computer device is provided with a corresponding distance threshold interval: when the distance information between the computer device and the current user falls within the distance threshold interval, it is determined that the distance information satisfies the preset distance threshold and the current user's posture meets the requirements. If the distance information between the computer device and the current user falls outside the distance threshold interval, it is determined that the distance information does not satisfy the preset distance threshold, and the computer device issues a corresponding prompt notification, for example reminding the current user through sound, images, vibration, or text that the posture needs to be adjusted, so as to ensure that the corresponding distance information satisfies the preset distance threshold.
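The posture check above reduces to an interval test on the measured distance. A minimal sketch follows; the 30-60 cm interval and the return values are illustrative assumptions rather than figures from the source.

```python
def check_distance(distance_cm, threshold=(30.0, 60.0)):
    """Return None when the measured user-to-device distance falls
    inside the preset distance threshold interval (posture OK);
    otherwise return a prompt that would trigger the notification
    (sound, image, vibration, or text)."""
    low, high = threshold
    if low <= distance_cm <= high:
        return None  # posture meets the requirements
    return "adjust posture"  # out of range: issue the prompt notification
```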
Figure 3 shows a projection interaction method according to one aspect of the present application. The method is applicable to the system shown in Figure 1, is applied to the target user device 200, and mainly includes steps S201, S202 and S203. In step S201, the top image information transmitted by the corresponding computer device and captured by the corresponding top camera device is received and presented. In step S202, a first user operation of the target user corresponding to the target user device is obtained, and annotation information about the top image information is generated based on the first user operation. In step S203, the annotation information is returned to the computer device, for the computer device to present the annotation information through a corresponding projection device.
For example, the current user (for example, user A) holds a computer device through which he or she can communicate with the target user device held by the target user (for example, user B), such as by establishing a wired or wireless communication connection between the computer device and the target user device, or by transmitting data between the two via a network device. The computer device includes a top camera device that captures image information about the current user's operation object; for example, the top camera device captures the operation object from above (directly above, obliquely above, etc.). In some cases, the top camera device is disposed directly above the operation object, with its optical axis passing through the body or center of the operation object. To allow the top camera device to meet these requirements, the computer device is typically provided with a forward extension arm on which the top camera device is mounted facing straight down to capture the image information below. Below the extension arm, between the computer device and its user, an operation area for the operation object is provided. The operation area may be an area extending downward from the computer device itself (which allows the top image information to be captured relatively precisely), or a blank area set between the computer device and the user (for example, a blank desktop used as the operation area). Of course, in some cases the base of the computer device is kept level to keep the device stable; correspondingly, the optical axis of the top camera device points vertically downward and the surface containing the operation area remains horizontal, so that in the top image information captured by the computer device, all regions of the operation object are at similar distances from the top camera device.
The computer device captures top image information about the operation object through the top camera device, and sends the top image information to the target user device either directly or via a network device. In some embodiments, the computer device may first run recognition on the top image information (for example, recognizing and tracking preset template features) to ensure that the corresponding operation object is present in the top image information. If no operation object is present in the current top image information, the computer device may adjust the camera angle of the top camera device and capture images of other areas so as to ensure that an operation object appears in the top image information, for example by adjusting the extension angle or height of the extension arm, or by directly adjusting the camera pose of the top camera device, thereby changing its camera angle. If no operation object is found after all angles have been tried, or if a certain number of consecutively captured top images contain no operation object, the computer device presents corresponding prompt information to remind the current user that no operation object exists in the current operation area. If an operation object is present in the current top image information, the top image information is transmitted to the target user device. After receiving the top image information, the target user device presents it, for example by displaying it on a display screen or projecting it with a projector.
The target user device receives and presents the top image information, so that the target user can operate and interact based on it. While presenting the top image information, the target user device can also collect, through an input device, the target user's annotation information about an interaction object in the top image information. The interaction position of the interaction object may be predetermined, may be determined by the current user and transmitted to the target user device, or may be determined from the target user's operation position (for example, a touch position, cursor position, gesture recognition result or speech recognition result). If the interaction position in the top image information is predetermined or determined by the current user, the target user device transmits the annotation information directly to the computer device; if the interaction position is determined from the target user's operation position, the target user device takes that interaction position as the annotation position information, generates corresponding image annotation information by combining it with the annotation information, and returns the image annotation information to the computer device. For example, in some implementations the method further includes step S204 (not shown). In step S204, corresponding annotation position information is determined based on the first user operation; accordingly, in step S203, the annotation information and the annotation position information are returned to the computer device, for the computer device to present the annotation information through the corresponding projection device based on the annotation position information.
For example, the top image information includes an image coordinate system established on it. The image coordinate system takes some pixel (for example, the pixel at the upper-left corner of the image) as the origin, with the horizontal axis as the X axis and the vertical axis as the Y axis, establishing a corresponding image/pixel coordinate system. The corresponding annotation position information includes the coordinate position, in this image coordinate system, of the annotation information or of the interaction object being annotated; this coordinate position information may indicate the center position of the annotation information or interaction object, a set of coordinates describing its region, and so on. The annotation position information may be determined from user operations of the current user or the target user, or may be preset. The corresponding annotation information is determined by the target user device based on the target user's first user operation on the top image, for example from marks the target user adds via mouse input, keyboard input, touch-screen input, gesture input or voice input. In some cases, the annotation position information is determined directly from the image coordinates, within the top image information, of the mouse click position, the position corresponding to the keyboard input, the touch position on the touch screen, or the gesture or speech recognition result; alternatively, those inputs are first used to determine the corresponding interaction object in the top image information, and the corresponding image annotation position is then determined from that interaction object.
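The mapping from an operation position to image-coordinate annotation position information can be sketched as below. This assumes the top image is shown scaled to fit a viewport on the target user device; the function name and sizes are illustrative, and a real UI would also handle letterboxing offsets when aspect ratios differ.

```python
# Minimal sketch of deriving annotation position information from a
# touch/click on the displayed top image. Names and sizes are
# illustrative assumptions.

def click_to_image_coords(click_xy, viewport_size, image_size):
    """Map a click in viewport coordinates to pixel coordinates in the
    top image, whose origin is the upper-left pixel with X to the right
    and Y downward (the image coordinate system described above)."""
    cx, cy = click_xy
    vw, vh = viewport_size
    iw, ih = image_size
    # Per-axis scaling from the displayed size back to image pixels.
    x = cx * iw / vw
    y = cy * ih / vh
    return (round(x), round(y))
```

For instance, a click at (400, 300) in an 800x600 viewport showing a 1920x1080 top image maps to image pixel (960, 540).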
The annotation information includes, but is not limited to, marks about the interaction object in the top image information such as stickers, text, graphics, video, doodles, 2D marks or 3D marks. In some embodiments, the representation of the corresponding image position information also differs depending on the type of annotation information.
For example, the projection device is mounted near the top camera device, such as a projector mounted on the top extension arm together with the top camera. The mapping relationship between the projection device and the top camera device can be obtained by calculation, and based on this mapping relationship the annotation position information can be converted into projection position information, for example the projection coordinates of the annotation information in the projection image coordinate system of the projection image (the projection position information here is only an example and is not limiting). The computer device projects the annotation information onto the corresponding operation area according to the corresponding projection coordinate information, thereby presenting the annotation information at the corresponding region of the operation object.
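Since the operation area is a flat surface, one common way to realize such a camera-to-projector mapping is a planar homography; the sketch below shows the conversion under that assumption. The 3x3 matrix is an illustrative placeholder, which in practice would come from calibrating the projector-camera pair (for example by projecting known patterns onto the operation area).

```python
# Sketch of converting annotation position information (camera-image
# pixels) into projection position information (projector pixels) via a
# planar homography between the two views. The matrix values are
# illustrative, not calibration results from the specification.
import numpy as np

def camera_to_projector(point_xy, homography):
    """Apply a 3x3 homography to a 2D point in homogeneous coordinates."""
    p = np.array([point_xy[0], point_xy[1], 1.0])
    q = homography @ p
    return (q[0] / q[2], q[1] / q[2])

# Placeholder calibration: pure scale + translation, a special case
# that holds for a level, fronto-parallel operation surface.
H = np.array([[0.5, 0.0, 100.0],
              [0.0, 0.5, 50.0],
              [0.0, 0.0, 1.0]])
```

For example, the annotation position (960, 540) in the camera image maps to (580.0, 320.0) in projector coordinates under this placeholder matrix.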
The foregoing mainly introduces embodiments of a projection interaction method of the present application. In addition, the present application also provides a computer device and a target user device capable of implementing the above embodiments, which are introduced below with reference to Figures 4 and 5.
Figure 4 shows a computer device 100 for projection interaction according to one aspect of the present application. The device includes a 1-1 module 101, a 1-2 module 102 and a 1-3 module 103. The 1-1 module 101 is configured to capture corresponding top image information through the top camera device and transmit the top image information to the corresponding target user device, for the target user device to present the top image information. The 1-2 module 102 is configured to obtain image annotation information, about the top image information, of the target user corresponding to the target user device, where the image annotation information includes corresponding annotation information and the annotation position information of that annotation information, and the annotation information is determined by a first user operation of the target user. The 1-3 module 103 is configured to determine corresponding projection position information based on the annotation position information, and to project and present the annotation information based on the projection position information.
Here, the specific implementations of the 1-1 module 101, the 1-2 module 102 and the 1-3 module 103 shown in Figure 4 are the same as or similar to the embodiments of step S101, step S102 and step S103 shown in Figure 2, and are therefore not repeated but incorporated herein by reference.
In some implementations, the computer device further includes a display device, and the device further includes a 1-4 module (not shown) configured to receive target image information transmitted by the target user device and present the target image information through the display device.
In some implementations, the computer device further includes a front camera device, where the front camera device is configured to capture front image information about the current user of the computer device; the device further includes a 1-5 module (not shown) configured to transmit the front image information to the target user device, for the target user device to present the front image information.
In some implementations, the device further includes a 1-6 module (not shown) configured to obtain a camera switching request regarding the current video interaction between the computer device and the target user device, where the image information of the current video interaction includes the front image information. In this case, the 1-1 module 101 is configured to, in response to the camera switching request, turn off the front camera device and enable the top camera device, capture corresponding top image information through the top camera device, and transmit the top image information to the corresponding target user device, for the target user device to present the top image information.
In some implementations, the 1-5 module is configured to receive a camera switching request, transmitted by the target user device, regarding the current video interaction between the computer device and the target user device, where the camera switching request is determined by a second user operation of the target user.
In some implementations, the 1-5 module is configured to obtain a video establishment request regarding the current video interaction; and, in response to the video establishment request, establish the current video interaction between the computer device and the target user device based on the request, capture corresponding front image information through the front camera device, and transmit it to the target user device, for the target user device to present the front image information.
In some implementations, the device further includes a 1-7 module (not shown) configured to obtain a camera restoration request regarding the current video interaction between the computer device and the target user device; and, in response to the camera restoration request, turn off the top camera device and enable the front camera device, capture corresponding front image information through the front camera device, and transmit the front image information to the corresponding target user device.
In some implementations, the device further includes a 1-8 module (not shown) configured to obtain a camera start request regarding the video interaction between the computer device and the target user device. In this case, the 1-1 module 101 is configured to, in response to the camera start request, capture corresponding top image information through the top camera device and transmit the top image information to the corresponding target user device; and the 1-5 module is configured to, in response to the camera start request, capture corresponding front image information through the front camera device and transmit the front image information to the target user device.
In some implementations, the computer device includes a lighting device; the device further includes a 1-9 module (not shown) configured to turn on the lighting device if an enabling request for the lighting device is obtained.
In some implementations, the computer device further includes an ambient light detection device, and the device further includes a 1-10 module (not shown) configured to obtain light intensity information of the current environment based on the ambient light detection device, detect whether the light intensity information satisfies a preset illumination threshold, and, if not, adjust the lighting device until the light intensity information satisfies the preset illumination threshold.
In some implementations, the device further includes a 1-11 module (not shown) configured to determine image brightness information corresponding to the top image information based on the top image information, detect whether the image brightness information satisfies a preset brightness threshold, and, if not, adjust the lighting device until the image brightness information satisfies the preset brightness threshold.
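One plausible realization of this brightness feedback loop is sketched below, taking the image brightness information to be the mean grayscale value of the top image. The threshold value, the level range and the callbacks are illustrative assumptions; the specification does not fix how brightness is computed or how the lighting device is stepped.

```python
# Sketch of a 1-11 module brightness check: mean grayscale value as the
# image brightness information, with the lighting level stepped up until
# the preset brightness threshold is met. All constants are assumptions.
import numpy as np

BRIGHTNESS_THRESHOLD = 100.0  # assumed preset brightness threshold (0-255)

def image_brightness(gray_image):
    """Image brightness information: mean pixel value of a grayscale frame."""
    return float(np.asarray(gray_image).mean())

def adjust_lighting(capture_frame, set_light_level, max_level=10):
    """Raise the lighting level until the captured brightness meets the
    threshold; returns the level reached (saturating at max_level)."""
    for level in range(max_level + 1):
        set_light_level(level)
        if image_brightness(capture_frame()) >= BRIGHTNESS_THRESHOLD:
            return level
    return max_level  # lighting saturated, brightness still below threshold
```

`capture_frame` and `set_light_level` stand in for the top camera device and the lighting device driver, respectively.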
In some implementations, the device further includes a 1-12 module (not shown) configured to obtain the current user's interaction position information regarding the top image information, determine corresponding virtual presentation information based on the interaction position information, and project and present the virtual presentation information through the projection device.
In some implementations, the computer device includes an infrared measurement device, and obtaining the current user's interaction position information regarding the top image information includes determining the interaction position information of the top image information through the infrared measurement device.
In some implementations, the 1-2 module 102 is configured to send the interaction position information to the target user device and receive the annotation information transmitted by the target user device in response to the interaction position information, thereby determining the image annotation information of the top image information, where the image annotation information includes the annotation information and the interaction position information, and the annotation information is determined by a first user operation of the target user.
In some implementations, the computer device further includes a distance measurement device, and the device further includes a 1-13 module (not shown) configured to determine, through the distance measurement device, the distance information between the current user and the computer device, the computer device issuing a notification if the distance information does not satisfy a preset distance threshold.
Here, the specific implementations of the 1-4 through 1-13 modules are the same as or similar to the foregoing embodiments of steps S104 through S113, and are therefore not repeated but incorporated herein by reference.
Figure 5 shows a target user device 200 for projection interaction according to one aspect of the present application, mainly including a 2-1 module 201, a 2-2 module 202 and a 2-3 module 203. The 2-1 module 201 is configured to receive and present the top image information transmitted by the corresponding computer device and captured by the corresponding top camera device; the 2-2 module 202 is configured to obtain a first user operation of the target user corresponding to the target user device and generate annotation information about the top image information based on the first user operation; and the 2-3 module 203 is configured to return the annotation information to the computer device, for the computer device to present the annotation information through the corresponding projection device.
Here, the specific implementations of the 2-1 module 201, the 2-2 module 202 and the 2-3 module 203 shown in Figure 5 are the same as or similar to the embodiments of step S201, step S202 and step S203 shown in Figure 3, and are therefore not repeated but incorporated herein by reference.
In some implementations, the device further includes a 2-4 module (not shown) configured to determine corresponding annotation position information based on the first user operation; in this case, the 2-3 module 203 is configured to return the annotation information and the annotation position information to the computer device, for the computer device to present the annotation information through the corresponding projection device based on the annotation position information.
Here, the specific implementation of the 2-4 module is the same as or similar to the foregoing embodiment of step S204, and is therefore not repeated but incorporated herein by reference.
In addition to the methods and devices introduced in the above embodiments, the present application also provides a computer-readable storage medium storing computer code which, when executed, causes the method described in any of the preceding items to be performed.
The present application also provides a computer program product which, when executed by a computer device, causes the method described in any of the preceding items to be performed.
The present application also provides a computer device, the computer device including:
one or more processors;
a memory for storing one or more computer programs;
where the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the preceding items.
Figure 6 shows an exemplary system that can be used to implement the various embodiments described in this application.
As shown in Figure 6, in some embodiments the system 300 can serve as any of the above-mentioned devices in the described embodiments. In some embodiments, the system 300 may include one or more computer-readable media (for example, system memory or NVM/storage device 320) having instructions, and one or more processors (for example, processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions so as to implement modules that perform the actions described in this application.
For one embodiment, the system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with the system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. The memory controller module 330 may be a hardware module, a software module and/or a firmware module.
The system memory 315 may be used, for example, to load and store data and/or instructions for the system 300. For one embodiment, the system memory 315 may include any suitable volatile memory, for example suitable DRAM. In some embodiments, the system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module 310 may include one or more input/output (I/O) controllers to provide interfaces to the NVM/storage device 320 and the communication interface(s) 325.
For example, the NVM/storage device 320 may be used to store data and/or instructions. The NVM/storage device 320 may include any suitable non-volatile memory (for example, flash memory) and/or may include any suitable non-volatile storage device(s) (for example, one or more hard disk drives (HDDs), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
The NVM/storage device 320 may include storage resources that are physically part of the device on which the system 300 is installed, or it may be accessible by that device without necessarily being part of it. For example, the NVM/storage device 320 may be accessed over a network via the communication interface(s) 325.
The communication interface(s) 325 may provide an interface for the system 300 to communicate over one or more networks and/or with any other suitable device. The system 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 (for example, the memory controller module 330). For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310 to form a system on chip (SoC).
In various embodiments, the system 300 may be, but is not limited to, a server, a workstation, a desktop computing device or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, the system 300 may have more or fewer components and/or a different architecture. For example, in some embodiments the system 300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch-screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC) and a speaker.
It should be noted that the present application may be implemented in software and/or in a combination of software and hardware, for example using an application-specific integrated circuit (ASIC), a general-purpose computer or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software program of the present application (including related data structures) may be stored in a computer-readable recording medium, for example a RAM memory, a magnetic or optical drive, a floppy disk or a similar device. In addition, some steps or functions of the present application may be implemented in hardware, for example as a circuit that cooperates with a processor to perform the individual steps or functions.
In addition, part of the present application may be embodied as a computer program product, such as computer program instructions which, when executed by a computer, invoke or provide the method and/or technical solution according to the present application through the operation of that computer. Those skilled in the art will understand that the forms in which computer program instructions exist in a computer-readable medium include, but are not limited to, source files, executable files, installation package files, and the like; correspondingly, the ways in which computer program instructions are executed by a computer include, but are not limited to: the computer directly executes the instructions; the computer compiles the instructions and then executes the corresponding compiled program; the computer reads and executes the instructions; or the computer reads and installs the instructions and then executes the corresponding installed program. Here, the computer-readable medium may be any available computer-readable storage medium or communication medium accessible to a computer.
Communication media include media by which communication signals containing, for example, computer-readable instructions, data structures, program modules, or other data are transmitted from one system to another. Communication media may include guided transmission media, such as cables and wires (e.g., optical fiber, coaxial cable, etc.), and wireless (unguided transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared media. Computer-readable instructions, data structures, program modules, or other data may be embodied as, for example, a modulated data signal in a wireless medium such as a carrier wave or a similar mechanism embodied as part of spread-spectrum technology. The term "modulated data signal" refers to a signal that has one or more of its characteristics altered or set in such a manner as to encode information in the signal. The modulation may be an analog, digital, or hybrid modulation technique.
By way of example and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory and various read-only memories (ROM, PROM, EPROM, EEPROM); magnetic and ferromagnetic/ferroelectric memory (MRAM, FeRAM); magnetic and optical storage devices (hard disks, magnetic tape, CDs, DVDs); and any other media, now known or later developed, capable of storing computer-readable information/data for use by a computer system.
Here, one embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to run the methods and/or technical solutions based on the aforementioned multiple embodiments according to the present application.
It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, the embodiments should be regarded in all respects as exemplary and non-restrictive, and the scope of the present application is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and scope of the equivalents of the claims be embraced in the present application. Any reference sign in the claims shall not be construed as limiting the claim concerned. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or apparatuses recited in an apparatus claim may also be implemented by a single unit or apparatus through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.

Claims (22)

  1. A projection interaction method, wherein the method is applied to a computer device, the computer device comprising a top camera apparatus and a projection apparatus, the method comprising:
    collecting corresponding top image information through the top camera apparatus, and transmitting the top image information to a corresponding target user device for the target user device to present the top image information;
    obtaining image annotation information, regarding the top image information, of a target user corresponding to the target user device, wherein the image annotation information comprises corresponding annotation information and annotation position information of the annotation information, the annotation information being determined by a first user operation of the target user;
    determining corresponding projection position information based on the annotation position information, and projecting and presenting the annotation information based on the projection position information.
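The mapping in claim 1 from annotation position (in top-camera image coordinates) to projection position (in projector coordinates) can be illustrated as a planar homography between the two image planes. This is only a sketch of one plausible realization — the claim does not prescribe any particular transform, and the homography values here are hypothetical calibration data:

```python
def projection_position(annotation_xy, H):
    """Map an annotation position from camera-image coordinates to
    projector coordinates via a 3x3 planar homography H (a list of
    three rows). The calibration matrix H is an assumption for
    illustration; the claim does not specify this method."""
    x, y = annotation_xy
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)  # homogeneous -> Cartesian

# With an identity homography, projector coordinates equal camera coordinates.
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(projection_position((120.0, 80.0), IDENTITY))  # (120.0, 80.0)
```

In practice H would be estimated once from a few camera/projector point correspondences during device calibration.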
  2. The method according to claim 1, wherein the computer device further comprises a display apparatus, and the method further comprises:
    receiving target image information transmitted by the target user device, and presenting the target image information through the display apparatus.
  3. The method according to claim 1 or 2, wherein the computer device further comprises a front camera apparatus, the front camera apparatus being configured to collect front image information about a current user corresponding to the computer device; wherein the method further comprises:
    transmitting the front image information to the target user device for the target user device to present the front image information.
  4. The method according to claim 3, wherein the method further comprises:
    obtaining a camera switching request regarding a current video interaction between the computer device and the target user device, wherein image information of the current video interaction comprises the front image information;
    wherein the collecting corresponding top image information through the top camera apparatus and transmitting the top image information to the corresponding target user device for the target user device to present the top image information comprises:
    in response to the camera switching request, turning off the front camera apparatus and enabling the top camera apparatus, collecting the corresponding top image information through the top camera apparatus, and transmitting the top image information to the corresponding target user device for the target user device to present the top image information.
  5. The method according to claim 4, wherein the obtaining a camera switching request regarding the current video interaction between the computer device and the target user device comprises:
    receiving a camera switching request, transmitted by the target user device, regarding the current video interaction between the computer device and the target user device, wherein the camera switching request is determined based on a second user operation of the target user.
  6. The method according to claim 3 or 4, wherein the transmitting the front image information to the target user device for the target user device to present the front image information comprises:
    obtaining a video establishment request regarding the current video interaction;
    in response to the video establishment request, establishing the current video interaction between the computer device and the target user device based on the video establishment request, collecting corresponding front image information through the front camera apparatus, and transmitting it to the target user device for the target user device to present the front image information.
  7. The method according to claim 4, wherein the method further comprises:
    obtaining a camera restoration request regarding the current video interaction between the computer device and the target user device;
    in response to the camera restoration request, turning off the top camera apparatus and enabling the front camera apparatus, collecting corresponding front image information through the front camera apparatus, and transmitting the front image information to the corresponding target user device.
  8. The method according to claim 3, wherein the method further comprises:
    obtaining a camera activation request regarding a video interaction between the computer device and the target user device;
    wherein the collecting corresponding top image information through the top camera apparatus and transmitting the top image information to the corresponding target user device for the target user device to present the top image information comprises:
    in response to the camera activation request, collecting the corresponding top image information through the top camera apparatus, and transmitting the top image information to the corresponding target user device;
    wherein the transmitting the front image information to the target user device for the target user device to present the front image information comprises:
    in response to the camera activation request, collecting the corresponding front image information through the front camera apparatus, and transmitting the front image information to the target user device.
  9. The method according to claim 1, wherein the computer device comprises a lighting apparatus; wherein the method further comprises:
    if an activation request regarding the lighting apparatus is obtained, turning on the lighting apparatus.
  10. The method according to claim 9, wherein the computer device further comprises an ambient light detection apparatus, and the method further comprises:
    obtaining light intensity information of the current environment based on the ambient light detection apparatus, and detecting whether the light intensity information satisfies a preset lighting threshold;
    if not, adjusting the lighting apparatus until the light intensity information satisfies the preset lighting threshold.
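The adjust-until-threshold behaviour of claim 10 amounts to a simple feedback loop. The sketch below is illustrative only: the sensor and lamp interfaces (`read_lux`, `set_level`) and the step size are hypothetical stand-ins, not part of the claim:

```python
def adjust_lighting(read_lux, set_level, threshold, step=0.1, max_level=1.0):
    """Raise the lamp output until the measured ambient intensity
    satisfies the preset lighting threshold, or the lamp saturates.
    `read_lux` and `set_level` are assumed device interfaces."""
    level = 0.0
    set_level(level)
    while read_lux() < threshold and level < max_level:
        level = min(level + step, max_level)
        set_level(level)
    return level

# Toy environment: measured lux = 50 ambient + 200 * lamp level.
state = {"level": 0.0}
read_lux = lambda: 50 + 200 * state["level"]
set_level = lambda lv: state.update(level=lv)
final = adjust_lighting(read_lux, set_level, threshold=150)
print(round(final, 1))  # 0.5
```

A real implementation would add hysteresis or a settling delay between steps so the sensor reading reflects the new lamp level before the next comparison.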
  11. The method according to claim 9, wherein the method further comprises:
    determining, based on the top image information, image brightness information corresponding to the top image information, and detecting whether the image brightness information satisfies a preset brightness threshold;
    if not, adjusting the lighting apparatus until the image brightness information satisfies the preset brightness threshold.
  12. The method according to claim 1, wherein the method further comprises:
    obtaining interaction position information of the current user regarding the top image information;
    determining corresponding virtual presentation information based on the interaction position information, and projecting and presenting the virtual presentation information through the projection apparatus.
  13. The method according to claim 12, wherein the computer device comprises an infrared measurement apparatus; wherein the obtaining interaction position information of the current user regarding the top image information comprises:
    determining the interaction position information of the top image information through the infrared measurement apparatus.
  14. The method according to claim 12 or 13, wherein the obtaining image annotation information, regarding the top image information, of the target user corresponding to the target user device comprises:
    sending the interaction position information to the target user device, and receiving annotation information transmitted by the target user device and returned based on the interaction position information, thereby determining the image annotation information of the top image information, wherein the image annotation information comprises the annotation information and the interaction position information, the annotation information being determined by the first user operation of the target user.
  15. The method according to claim 1, wherein the computer device further comprises a distance measurement apparatus; wherein the method further comprises:
    determining distance information between the current user and the computer device through the distance measurement apparatus, and, if the distance information does not satisfy a preset distance threshold, issuing a notification by the computer device.
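The distance check of claim 15 reduces to comparing a sensor reading against a preset range and emitting a notification on violation. A minimal sketch — the range values and notification strings are illustrative assumptions, not taken from the claim:

```python
def check_distance(distance_cm, preset_range_cm=(30, 120)):
    """Return a notification string when the measured user-to-device
    distance falls outside the preset range, else None. The range
    (30-120 cm) is a hypothetical threshold for illustration."""
    low, high = preset_range_cm
    if distance_cm < low:
        return "too close: move away from the device"
    if distance_cm > high:
        return "too far: move toward the device"
    return None

print(check_distance(20))  # too close: move away from the device
print(check_distance(60))  # None
```

On the device this would run per sensor sample, with the notification surfaced as a sound or an on-screen prompt.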
  16. A projection interaction method, wherein the method is applied to a target user device, the method comprising:
    receiving and presenting top image information transmitted by a corresponding computer device and collected through a corresponding top camera apparatus;
    obtaining a first user operation of a target user corresponding to the target user device, and generating annotation information about the top image information based on the first user operation;
    returning the annotation information to the computer device for the computer device to present the annotation information through a corresponding projection apparatus.
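On the target-user-device side (claims 16 and 17), the exchange reduces to capturing the user's annotation together with its position and sending both back to the computer device. The sketch below uses a JSON message format that is purely an assumption — the claims do not prescribe any wire format:

```python
import json

def build_annotation_message(stroke_points, label):
    """Bundle annotation information and its bounding-box position for
    return to the computer device. The schema (keys 'annotation' and
    'position') is a hypothetical illustration, not claimed."""
    xs = [p[0] for p in stroke_points]
    ys = [p[1] for p in stroke_points]
    return json.dumps({
        "annotation": label,
        "position": {"x": min(xs), "y": min(ys),
                     "w": max(xs) - min(xs), "h": max(ys) - min(ys)},
    })

# A two-point stroke drawn by the remote user over the received frame.
msg = build_annotation_message([(10, 20), (30, 60)], "loosen this screw")
print(msg)
```

The computer device would parse such a message, map the position into projector coordinates, and project the label at that location.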
  17. The method according to claim 16, wherein the method further comprises:
    determining corresponding annotation position information based on the first user operation;
    wherein the returning the annotation information to the computer device for the computer device to present the annotation information through a corresponding projection apparatus comprises:
    returning the annotation information and the annotation position information to the computer device for the computer device to present the annotation information through the corresponding projection apparatus based on the annotation position information.
  18. A computer device for projection interaction, the computer device comprising a top camera apparatus and a projection apparatus, the device comprising:
    a one-one module, configured to collect corresponding top image information through the top camera apparatus, and transmit the top image information to a corresponding target user device for the target user device to present the top image information;
    a one-two module, configured to obtain image annotation information, regarding the top image information, of a target user corresponding to the target user device, wherein the image annotation information comprises corresponding annotation information and annotation position information of the annotation information, the annotation information being determined by a first user operation of the target user;
    a one-three module, configured to determine corresponding projection position information based on the annotation position information, and project and present the annotation information based on the projection position information.
  19. A target user device for projection interaction, wherein the device comprises:
    a two-one module, configured to receive and present top image information transmitted by a corresponding computer device and collected through a corresponding top camera apparatus;
    a two-two module, configured to obtain a first user operation of a target user corresponding to the target user device, and generate annotation information about the top image information based on the first user operation;
    a two-three module, configured to return the annotation information to the computer device for the computer device to present the annotation information through a corresponding projection apparatus.
  20. A computer device, wherein the device comprises:
    a processor; and
    a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of the method according to any one of claims 1 to 17.
  21. A computer-readable storage medium having a computer program/instructions stored thereon, wherein the computer program/instructions, when executed, cause a system to perform the steps of the method according to any one of claims 1 to 17.
  22. A computer program product comprising a computer program/instructions, wherein the computer program/instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 17.
PCT/CN2022/095921 2022-03-11 2022-05-30 Projection interaction method, and device, medium and program product WO2023168836A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210241557 2022-03-11
CN202210241557.9 2022-03-11

Publications (1)

Publication Number Publication Date
WO2023168836A1 true WO2023168836A1 (en) 2023-09-14

Family

ID=83514634

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095921 WO2023168836A1 (en) 2022-03-11 2022-05-30 Projection interaction method, and device, medium and program product

Country Status (2)

Country Link
CN (1) CN115185437A (en)
WO (1) WO2023168836A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577788A (en) * 2012-07-19 2014-02-12 华为终端有限公司 Augmented reality realizing method and augmented reality realizing device
CN111752376A (en) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 Labeling system based on image acquisition
CN111757074A (en) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 Image sharing marking system
CN113096003A (en) * 2021-04-02 2021-07-09 北京车和家信息技术有限公司 Labeling method, device, equipment and storage medium for multiple video frames
US20220078384A1 (en) * 2020-09-10 2022-03-10 Seiko Epson Corporation Information generation method, information generation system, and non- transitory computer-readable storage medium storing program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116866336A (en) * 2019-03-29 2023-10-10 亮风台(上海)信息科技有限公司 Method and equipment for performing remote assistance
CN111988493B (en) * 2019-05-21 2021-11-30 北京小米移动软件有限公司 Interaction processing method, device, equipment and storage medium
CN113741698B (en) * 2021-09-09 2023-12-15 亮风台(上海)信息科技有限公司 Method and device for determining and presenting target mark information


Also Published As

Publication number Publication date
CN115185437A (en) 2022-10-14

Similar Documents

Publication Publication Date Title
US11868543B1 (en) Gesture keyboard method and apparatus
US11640235B2 (en) Additional object display method and apparatus, computer device, and storage medium
US8976135B2 (en) Proximity-aware multi-touch tabletop
WO2023035829A1 (en) Method for determining and presenting target mark information and apparatus
JP6129863B2 (en) Three-dimensional touch type input system and optical navigation method
JP2014533347A (en) How to extend the range of laser depth map
CN103391411A (en) Image processing apparatus, projection control method and program
US9632592B1 (en) Gesture recognition from depth and distortion analysis
US20170068417A1 (en) Information processing apparatus, program, information processing method, and information processing system
US11935294B2 (en) Real time object surface identification for augmented reality environments
WO2021135288A1 (en) Touch control method for display, terminal device, and storage medium
WO2020103657A1 (en) Video file playback method and apparatus, and storage medium
US9547370B2 (en) Systems and methods for enabling fine-grained user interactions for projector-camera or display-camera systems
US11853651B2 (en) Method to determine intended direction of a vocal command and target for vocal interaction
TW201425974A (en) Apparatus and method for gesture detecting
CN109656364B (en) Method and device for presenting augmented reality content on user equipment
US9304582B1 (en) Object-based color detection and correction
WO2020192175A1 (en) Three-dimensional graph labeling method and apparatus, device, and medium
TWI656359B (en) Device for mixed reality
KR20170040222A (en) Reflection-based control activation
US11057549B2 (en) Techniques for presenting video stream next to camera
WO2023168836A1 (en) Projection interaction method, and device, medium and program product
US11769293B2 (en) Camera motion estimation method for augmented reality tracking algorithm and system therefor
CN114513689A (en) Remote control method, electronic equipment and system
US20160091966A1 (en) Stereoscopic tracking status indicating method and display apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930465

Country of ref document: EP

Kind code of ref document: A1