WO2023168836A1 - Projection interaction method, device, medium and program product - Google Patents

Projection interaction method, device, medium and program product

Info

Publication number
WO2023168836A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
target user
image information
annotation
computer
Prior art date
Application number
PCT/CN2022/095921
Other languages
English (en)
Chinese (zh)
Inventor
廖春元
方中慧
杨浩
林祥杰
Original Assignee
亮风台(上海)信息科技有限公司
Priority date
Filing date
Publication date
Application filed by 亮风台(上海)信息科技有限公司
Publication of WO2023168836A1

Classifications

    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06F 3/0489: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using dedicated keyboard keys or combinations thereof
    • G06T 19/006: Manipulating 3D models or images for computer graphics; mixed reality

Definitions

  • the present application relates to the field of communications, and in particular to a technology for projection interaction.
  • Augmented Reality (AR) is a technology that calculates the position and angle of camera images in real time and superimposes corresponding virtual three-dimensional model animations, video, text, pictures and other digital information.
  • the goal of this technology is to nest the virtual world within the real world and allow the two to interact.
  • Video call technology usually refers to a communication method based on the Internet and mobile Internet that transmits human voice and images in real time between smart devices.
  • Existing computer equipment lacks interactive capabilities other than video communication.
  • One purpose of this application is to provide a projection interaction method, device, medium and program product.
  • a projection interaction method is provided, wherein the method is applied to a computer device, the computer device includes a top camera device and a projection device, and the method includes:
  • collecting corresponding top image information through the top camera device, and transmitting the top image information to the corresponding target user equipment for the target user equipment to present;
  • obtaining image annotation information about the top image information from the target user device corresponding to the target user, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user;
  • determining corresponding projection position information based on the annotation position information, and projecting and presenting the annotation information based on the projection position information.
  • a projection interaction method is provided, wherein the method is applied to a target user device and includes:
  • receiving and presenting top image information transmitted by the corresponding computer device and collected by the corresponding top camera device; obtaining a first user operation of the target user corresponding to the target user device, and generating annotation information about the top image information based on the first user operation; and returning the annotation information to the computer device for the computer device to present the annotation information through a corresponding projection device.
  • a computer device for projection interaction includes a top camera device and a projection device.
  • the device includes:
  • a one-one module configured to collect corresponding top image information through the top camera device, and transmit the top image information to the corresponding target user equipment, so that the target user equipment can present the top image information;
  • a one-two module configured to obtain image annotation information about the top image information from the target user device corresponding to the target user, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user;
  • a one-three module configured to determine corresponding projection position information based on the annotation position information, and to project and present the annotation information based on the projection position information.
  • a target user device for projection interaction is provided, wherein the device includes:
  • a two-one module configured to receive and present the top image information transmitted by the corresponding computer equipment and collected by the corresponding top camera device;
  • a two-two module configured to obtain the first user operation of the target user corresponding to the target user device, and to generate annotation information about the top image information based on the first user operation;
  • a two-three module configured to return the annotation information to the computer device, so that the computer device can present the annotation information through a corresponding projection device.
  • a computer device is provided, wherein the device includes:
  • a processor, and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of any of the methods described above.
  • a computer-readable storage medium on which a computer program/instructions are stored, wherein the computer program/instructions, when executed, cause a system to perform the steps of any of the methods described above.
  • a computer program product including a computer program/instruction, characterized in that when the computer program/instruction is executed by a processor, the steps of any of the above methods are implemented.
  • compared with the prior art, this application projects and presents the target user's image annotation information on the computer device side through interaction between the two parties, which provides the current user with engaging as well as more realistic and natural augmented reality interaction; in particular, it can enhance the sense of interaction and participation in parent-child companionship for parents who are not with their children.
  • Figure 1 shows a system topology diagram of projection interaction according to an embodiment of the present application
  • Figure 2 shows a method flow chart of a projection interaction method according to an embodiment of the present application
  • Figure 3 shows a method flow chart of a projection interaction method according to an embodiment of the present application
  • Figure 4 shows the functional modules of a computer device according to an embodiment of the present application;
  • Figure 5 shows the functional modules of a target user equipment according to an embodiment of the present application;
  • Figure 6 illustrates an example system that may be used to implement various embodiments described in this application.
  • the terminal, the device of the service network and the trusted party all include one or more processors (for example, central processing unit (Central Processing Unit, CPU)), input/output interfaces, network interfaces and Memory.
  • Memory may include non-permanent memory, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory, in computer-readable media. Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology.
  • Information may be computer-readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • the equipment referred to in this application includes but is not limited to user equipment, network equipment, or equipment composed of user equipment and network equipment integrated through a network.
  • the user equipment includes but is not limited to any kind of mobile electronic product that can interact with the user, such as smart phones, tablet computers, smart desk lamps, etc.
  • the mobile electronic product can use any operating system, such as Android operating system, iOS operating system, etc.
  • the network device includes an electronic device that can automatically perform numerical calculations and information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (ASIC), programmable logic devices (PLD), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, etc.
  • the network equipment includes but is not limited to a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes but is not limited to the Internet, wide area network, metropolitan area network, local area network, VPN network, wireless self-organizing network (Ad Hoc network), etc.
  • the device may also be a program running on the user equipment, the network equipment, or a device formed by integrating the user equipment and the network equipment, or the network equipment and a touch terminal, through a network.
  • Figure 1 shows a typical scenario of this application.
  • the computer device 100 establishes a communication connection with the target user device 200.
  • the computer device 100 transmits the corresponding top image information to the target user device 200; the target user device 200 receives the top image information.
  • the corresponding annotation information or image annotation information is determined based on the top image information, and then the annotation information or image annotation information is returned to the computer device 100.
  • the computer device 100 includes but is not limited to any electronic product that can perform human-computer interaction with the user, such as smart phones, tablet computers, smart desk lamps, smart projection devices, etc.
  • the computer equipment includes a top camera device, such as a camera or a depth camera, for collecting image information related to the operating object of the current user of the computer equipment (for example, the book the user is currently reading or the workpiece the user is operating) from above the operating object (for example, directly above or diagonally above).
  • the target user equipment includes but is not limited to any mobile electronic product that can perform human-computer interaction with the user, such as smart phones, tablet computers, personal computers, etc.
  • the target user equipment includes a display device for presenting the top image information, for example, a liquid crystal display screen or a projector; the target user equipment also includes an input device for collecting the user's annotation information or image annotation information about the top image information.
  • the annotation information includes but is not limited to mark information such as stickers, text, graphics, videos, graffiti, 2D marks or 3D marks about interactive objects in the top image information.
  • the corresponding image annotation information includes the above annotation information and the corresponding annotation position information.
  • the annotation position information includes, for example, the image coordinate information of the annotation information, or of the interactive object of the annotation information, in the image coordinate system; the annotation position information here is only an example and is not limited.
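  • for illustration only, the image annotation information described above could be represented by a structure along the following lines; this is a hypothetical Python sketch whose field names (kind, payload, position) are assumptions, not terms from the patent:

        from dataclasses import dataclass
        from typing import Literal, Tuple

        @dataclass
        class ImageAnnotation:
            # Hypothetical container for image annotation information:
            # the annotation content plus its annotation position.
            kind: Literal["sticker", "text", "graphic", "video",
                          "graffiti", "2d_mark", "3d_mark"]
            payload: bytes                 # encoded content of the annotation
            position: Tuple[float, float]  # (x, y) in the top image's pixel coordinate system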
  • the data transmission between the target user equipment and the computer equipment in this application may be based on a direct communication connection between the two devices, or may be forwarded via a corresponding server, etc.
  • FIG. 2 shows a projection interaction method according to one aspect of the present application.
  • the method is applied to the computer device 100 and can be applied to the system topology shown in Figure 1.
  • the method includes step S101, step S102 and step S103.
  • in step S101, the corresponding top image information is collected through the top camera device, and the top image information is transmitted to the corresponding target user equipment, so that the target user equipment can present the top image information;
  • in step S102, image annotation information about the top image information is obtained from the target user device corresponding to the target user, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by a first user operation of the target user;
  • in step S103, corresponding projection position information is determined based on the annotation position information, and the annotation information is projected and presented based on the projection position information.
  • the corresponding top image information is collected through the top camera device, and the top image information is transmitted to the corresponding target user equipment, so that the target user equipment can present the top image information.
  • the current user (for example, user A) holds the computer device, and the target user (for example, user B) holds the target user device.
  • the computer equipment includes a top camera device, which is used to collect image information related to the operating object of the current user.
  • the top camera device collects image information related to the operating object from above the operating object (for example, directly above or diagonally above).
  • in some cases, the top camera device is disposed directly above the operation object, and the optical axis of the top camera device can pass through the centroid or center of the operation object.
  • the computer equipment is usually provided with a forward extension rod, and the top camera device is placed on the extension rod facing directly downward to collect image information below; under the extension rod, between the computer equipment and the user, there is an operation area for placing the operation object.
  • the operation area can be an extension area below the computer device itself (for example, the top image information can be obtained more accurately through the extension area of the computer device itself) , or by setting a blank area between the computer device and the user as the corresponding operation area (for example, setting a blank desktop as the operation area, etc.).
  • the base of the computer equipment remains horizontal to maintain the stability of the computer equipment.
  • the optical axis corresponding to the top camera device points vertically downward, and the surface where the operating area is located remains horizontal, so that in the top image information captured by the computer equipment, the distances from each area of the operation object to the top camera device are similar.
  • the computer device collects top image information about the operating object through the top camera device, and sends the top image information directly or via a network device to the target user device.
  • the computer device may first perform recognition on the top image information (for example, identifying and tracking the operation object through preset template features), thereby ensuring that the corresponding operation object exists in the top image information. If there is no operation object in the current top image information, the computer device can adjust the camera angle of the top camera device and collect images from other areas to ensure that an operation object appears in the top image information, for example, by adjusting the extension angle and height of the extension rod, or by directly adjusting the pose information of the top camera device, thereby changing the camera angle of the top camera device.
  • the computer device displays corresponding prompt information to remind the current user that there is no operation object in the current operation area. If there is an operation object in the current top image information, the top image information is transmitted to the target user device. After receiving the top image information, the target user equipment presents the top image information, for example, displays the top image information through a display screen or displays the top image information through projection through a projector.
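  • one plausible implementation of the template-feature check mentioned above is sketched below using OpenCV ORB features; the feature count, ratio-test constant and match threshold are illustrative assumptions, not values from the patent:

        import cv2

        def operation_object_present(top_image_bgr, template_bgr, min_matches=25):
            # Heuristic presence check: match ORB features between a preset
            # template of the operation object and the current top image.
            orb = cv2.ORB_create(nfeatures=1000)
            gray_t = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)
            gray_i = cv2.cvtColor(top_image_bgr, cv2.COLOR_BGR2GRAY)
            _, des_t = orb.detectAndCompute(gray_t, None)
            _, des_i = orb.detectAndCompute(gray_i, None)
            if des_t is None or des_i is None:
                return False
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
            pairs = matcher.knnMatch(des_t, des_i, k=2)
            # Lowe's ratio test keeps only distinctive matches
            good = [p[0] for p in pairs
                    if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
            return len(good) >= min_matches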
  • the target user device receives and presents the top image information for the target user to perform operations and interactions based on the top image information. While presenting the top image information, the target user device can also collect the target user's annotation information about the interactive object in the top image information through the input device.
  • the interactive position of the interactive object may be predetermined, or may be determined by the current user, or may be determined based on the target user's operation position (for example, touch position, cursor position, gesture recognition result or voice recognition result, etc.) after the top image information is transmitted to the target user device.
  • for example, if the interaction position is predetermined or determined by the current user, the target user device directly transmits the annotation information to the computer device; if the interaction position in the top image information is determined by the target user's operation position, then the target user device determines the interaction position as the annotation position information, combines it with the annotation information to generate corresponding image annotation information, and returns the image annotation information to the computer device.
  • in step S102, image annotation information about the top image information is obtained from the target user device corresponding to the target user, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by the first user operation of the target user.
  • the top image information corresponds to an image coordinate system established based on the top image information.
  • for example, the image coordinate system takes a certain pixel (for example, the pixel in the upper left corner of the image) as the origin, with the horizontal axis as the X-axis and the vertical axis as the Y-axis, thereby establishing a corresponding image/pixel coordinate system.
  • the corresponding annotation position information includes the coordinate position information of the corresponding annotation information or the interactive object of the annotation information in the image coordinate system.
  • the coordinate position information can be used to indicate the center position of the annotation information or of the interactive object of the annotation information, or the coordinate set of its area range, etc.
  • the annotation location information may be determined based on user operations of the current user and the target user, or may be preset.
  • the corresponding annotation information is determined by the target user device based on the first user operation of the target user on the top image; for example, the corresponding annotation information is determined based on mark information added by the target user through mouse input, keyboard input, touch-screen input, gesture input or voice input operations.
  • the annotation position information is determined from the image coordinate information in the top image information corresponding to the mouse click position, the keyboard input position, the touch-screen touch position, or the gesture recognition or voice recognition result; alternatively, the interactive object is first determined in the top image information based on the mouse click position, the keyboard input position, the touch-screen touch position, or the gesture recognition or voice recognition result, and then the corresponding image annotation position is determined based on the interactive object.
  • the annotation information includes but is not limited to tag information such as stickers, text, graphics, videos, graffiti, 2D marks or 3D marks about interactive objects in the top image information.
  • for different types of annotation information, the presentation of the corresponding image position information also differs.
  • corresponding projection position information is determined based on the annotation position information, and the annotation information is projected and presented based on the projection position information.
  • the projection device is placed near the top camera device; for example, the corresponding projector and the top camera device are both placed on the top extension rod.
  • the mapping relationship between the corresponding projection device and the top camera device can be obtained through calculation.
  • s1: use the projection device to project a calibration image containing a specific pattern (such as a checkerboard) onto the operating area;
  • s2: use the top camera to collect a video image containing the operating area;
  • s3: use the image collected in step s2 to identify the coordinate information of each pattern in the display screen;
  • s4: establish the correspondence between the patterns in the original calibration image projected in s1 and the pattern coordinates in the video image of the operating area collected in s2;
  • s5: estimate the camera's internal and external parameters, as well as distortion parameters, based on the two sets of coordinates from s4;
  • s6: use the parameters obtained in s5 to realize the mapping between the two images.
  • the above-mentioned calculation method of the mapping relationship between the projection device and the top camera device is only an example, and other existing or future methods for calculating the mapping relationship between the projection device and the top camera device may be applicable.
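  • as a concrete illustration of steps s1 to s6, the following OpenCV sketch assumes a planar operating area, so that a single homography captures the projector-to-camera mapping; the pattern size and corner ordering are assumptions, and a full calibration would also estimate intrinsics and distortion as described in s5:

        import cv2
        import numpy as np

        PATTERN = (9, 6)  # inner corners of the projected checkerboard (assumed)

        def projector_corner_coords(pattern, proj_size, margin=100):
            # s1: pixel coordinates of the checkerboard corners in the image
            # that the projector displays, laid out row by row.
            w, h = proj_size
            xs = np.linspace(margin, w - margin, pattern[0])
            ys = np.linspace(margin, h - margin, pattern[1])
            return np.array([[x, y] for y in ys for x in xs], dtype=np.float32)

        def estimate_mapping(captured_bgr, proj_corners):
            # s2-s3: detect the projected pattern in the camera image;
            # s4-s6: build correspondences and estimate the mapping.
            gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
            found, cam_corners = cv2.findChessboardCorners(gray, PATTERN)
            if not found:
                raise RuntimeError("calibration pattern not found")
            cam_corners = cv2.cornerSubPix(
                gray, cam_corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
            # assumes detected corner order matches the projected row-major order
            H, _ = cv2.findHomography(proj_corners, cam_corners.reshape(-1, 2))
            return H  # maps projector pixels to camera (top image) pixels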
  • based on this mapping relationship, the annotation position information can be converted into projection position information, such as the projection coordinate information of the annotation information in the coordinate system of the projection image; the projection position information here is only an example and is not limited.
  • the computer device projects the annotation information to the corresponding operation area according to the corresponding projection coordinate information, thereby presenting the annotation information in the corresponding area in the operation object.
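  • under the same planar assumption, converting an annotation position given in top-image (camera) pixel coordinates into projector coordinates reduces to applying the inverse of the homography estimated above; a minimal sketch:

        def annotation_to_projector(H, annot_xy):
            # Map an annotation position from camera pixels to projector pixels
            # so the projection device can render it at the right spot.
            pt = np.array([[annot_xy]], dtype=np.float32)  # shape (1, 1, 2)
            proj_pt = cv2.perspectiveTransform(pt, np.linalg.inv(H))
            return tuple(proj_pt[0, 0])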
  • the computer device further includes a display device, and the method further includes step S104 (not shown).
  • in step S104, the target image information transmitted by the target user device is received, and the target image information is presented through the display device.
  • the computer device further includes a display device, which is used to present image information stored or received by the computer device, such as a liquid crystal display screen.
  • the display device is placed directly in front of or near the computer device facing the current user.
  • the target user equipment is equipped with a corresponding camera device for collecting target image information on the target user equipment side.
  • the target image information includes image information about the target user; the target user device can transmit the target image information to the computer device, and the computer device receives the target image information and presents it through the display device.
  • if the target image information and the corresponding top image information are included in video streams collected in real time, then both the computer device and the target user equipment can present the real-time video streams corresponding to the top image information and the target image information.
  • the corresponding video stream includes not only the images collected by the camera device but also voice information collected by a voice input device; while the corresponding computer equipment and target user equipment display the real-time video streams corresponding to the target image information and the top image information, they also play the corresponding voice information through a voice output device; that is, the target user equipment and the computer equipment conduct audio and video communication.
  • the computer equipment further includes a front-facing camera device, wherein the front-facing camera device is used to collect front image information about the current user of the computer device; the method further includes step S105 (not shown). In step S105, the front image information is transmitted to the target user equipment, so that the target user equipment can present the front image information.
  • the computer device further includes a front-facing camera device for collecting image information related to the current user holding the computer device. The front-facing camera device is placed on the side of the computer device facing the current user; for example, it can be disposed above the display device.
  • the front-facing camera device is mainly used to collect image information related to the current user's head, and is used to realize video interactive communication between the current user and the target user.
  • the corresponding front-facing camera device collects front-facing image information about the current user when enabled, and transmits the front-facing image information to the target user device for display by the display device of the target user device.
  • if the front image information is included in a video stream collected in real time, the target user equipment can present the real-time video stream corresponding to the front image information through the corresponding display device.
  • the front camera device may be enabled based on a video creation request from the target user equipment or the computer device, switched to from the enabled state of the top camera device, or turned on when an independent enable control of the front camera device is triggered.
  • the method further includes step S106 (not shown).
  • in step S106, a camera switching request regarding the current video interaction between the computer device and the target user device is obtained, wherein the image information of the current video interaction includes the front image information; in step S101, in response to the camera switching request, the front camera device is turned off and the top camera device is enabled, the corresponding top image information is collected through the top camera device, and the top image information is transmitted to the corresponding target user equipment, so that the target user equipment can present the top image information.
  • for example, only one of the front camera device and the top camera device of the computer equipment is enabled at any given time, thereby reducing the bandwidth pressure of video interaction and ensuring the efficiency and orderliness of the video interaction process.
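  • the switching logic can be pictured as a small state machine in which exactly one camera is active at a time; the following sketch is hypothetical, with the front_cam/top_cam objects and their start/stop methods assumed rather than taken from the patent:

        from enum import Enum, auto

        class CameraState(Enum):
            FRONT = auto()  # video communication mode
            TOP = auto()    # video guidance mode

        class CameraController:
            def __init__(self, front_cam, top_cam):
                self.front_cam, self.top_cam = front_cam, top_cam
                self.state = CameraState.FRONT

            def on_switch_request(self):
                # camera switching request (step S106): front off, top on
                if self.state is CameraState.FRONT:
                    self.front_cam.stop()
                    self.top_cam.start()
                    self.state = CameraState.TOP

            def on_restore_request(self):
                # camera restoration request (step S107): back to the front camera
                if self.state is CameraState.TOP:
                    self.top_cam.stop()
                    self.front_cam.start()
                    self.state = CameraState.FRONT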
  • the computer device is provided with a corresponding camera switching control.
  • the camera switching control may be a physical button provided on the computer device or a virtual control presented on the current screen.
  • the camera switching control is used to switch from the enabled state of the front camera device to the enabled state of the top camera device.
  • in some cases, the camera switching control is also used to adjust the computer device from the enabled state of the top camera device back to the enabled state of the front camera device; in other words, the camera switching control toggles the activation state between the top camera device and the front camera device. In other cases, the camera switching control is only used to switch from the enabled state of the front camera device to that of the top camera device.
  • the computer device is also provided with a corresponding camera restore control, and the camera restore control is used to adjust the computer device from the enabled state of the top camera device to the enabled state of the front camera device.
  • the computer device determines the camera switching request by recognizing the current user's interactive input operations such as gestures, voice, and head movements.
  • similarly, the target user device may also be provided with a corresponding camera switching control, or the target user device may determine the camera switching request by recognizing the target user's gestures, voice, head movements and other interactive input operations, which will not be described in detail here.
  • the computer device is in a state of collecting front image information during the video interaction process.
  • when a user (for example, the current user or the target user) touches the camera switching control, the computer device turns off the front camera device and enables the top camera device, collects the corresponding top image information through the top camera device, and transmits the top image information to the corresponding target user equipment, so that the target user equipment can present the top image information.
  • the target user can perform a first user operation on the top image information to determine the image annotation information of the top image information.
  • the computer device determines the projection position information based on the annotation position information, and projects and presents the annotation information based on the projection position information.
  • in this way, the target user can provide intuitive guidance regarding the current user's operation object.
  • the activation status of different camera devices corresponds to different interaction modes. For example, when the computer device currently has only the front camera device turned on, the computer device is in video communication mode, which is used to achieve video communication between the current user and the target user; when the computer device currently has only the top camera device turned on, the computer device is in video guidance mode, which is used to let the target user provide communication and guidance about the current user's operation object; when the computer device has both the front camera device and the top camera device turned on, the computer device not only enables the target user to communicate about and guide the current user's operation object, but also facilitates video communication between the target user and the current user.
  • in some embodiments, in step S106, a camera switching request transmitted by the target user equipment regarding the current video interaction between the computer device and the target user equipment is received, wherein the camera switching request is determined based on a second user operation of the target user.
  • for example, the target user device may generate a camera switching request for the current video interaction based on the target user's second user operation (for example, a trigger operation on the camera switching control, or an interactive input operation instruction regarding camera switching), and transmit the camera switching request to the corresponding computer device; alternatively, the target user equipment sends the second user operation to the corresponding server, and the server generates the corresponding camera switching request based on the second user operation and sends the camera switching request to the computer device.
  • here, the first user operation and the second user operation are only used to distinguish user operations with different functions, and do not imply any order, magnitude or other relationship between the operations.
  • for example, while the target user equipment obtains and presents the front image information from the front camera device, the corresponding camera switching control is presented on the current screen; the target user equipment can then collect the second user operation, such as the target user's touch operation on the camera switching control, to determine the camera switching request.
  • the target user device can collect the second user operation, such as the target user's gesture, voice, head movement, etc.
  • the camera switching request usually occurs after the video is established.
  • for example, the corresponding front-facing camera device can be enabled together with the video creation request, or enabled based on interaction between the two parties after the target user device and the computer device establish communication; the computer device can also switch from the enabled state of the top camera device to the enabled state of the front camera device after the target user equipment establishes communication with the computer device and first activates the top camera device.
  • in step S105, a video establishment request regarding the current video interaction is obtained; in response to the video establishment request, the video interaction between the computer device and the target user device is established based on the video establishment request, the corresponding front image information is collected through the front camera device, and the front image information is transmitted to the target user equipment, so that the target user equipment can present the front image information.
  • the current video interaction may be initiated based on a video creation request determined by the user operation of the current user or the target user.
  • during the video interaction, video streams are transmitted between the target user device and the computer device; for example, the computer device transmits the corresponding front image information to the target user device and the target user device transmits the corresponding target image information to the computer device, or only the computer device transmits the front image information to the target user device, etc.
  • the video creation request may be determined based on the current user's initiated operation on the computer device (for example, a triggering operation on a video creation control, or an interactive input operation instruction on video creation, etc.), and the video creation request includes a user identification corresponding to the target user.
  • the user identification information includes but is not limited to unique tag information used to identify the target user, such as name, image, ID number, mobile phone number, application serial number or device Media Access Control (MAC) address information; the computer device can send the video creation request to the network device for the network device to forward to the target user device and establish video interaction between the two, or the computer device can directly send the video creation request to the target user device and establish video interaction between the two.
  • the video creation request may also be determined based on the target user's initiating operation on the target user device (for example, a triggering operation on the video creation control, or an interactive input operation instruction regarding video creation), and the video creation request includes user identification information corresponding to the current user; the target user device can send the video creation request to the network device for the network device to forward to the computer device and establish video interaction between the two, or the target user device can directly send the video creation request to the computer device and establish video interaction between the two.
  • the method further includes step S107 (not shown).
  • in step S107, a camera restoration request regarding the current video interaction between the computer device and the target user device is obtained; in response to the camera restoration request, the top camera device is turned off and the front camera device is enabled, the corresponding front image information is collected through the front camera device, and the front image information is transmitted to the corresponding target user equipment.
  • the camera restoration request is used to adjust the computer device from the enabled state of the top camera device to the enabled state of the front camera device.
  • after restoration, the corresponding top camera device is in a closed state, and only front image information is collected and transmitted; for example, the computer device transmits the corresponding front image information to the target user device and the target user device transmits the corresponding target image information to the computer device, or only the computer device transmits the front image information to the target user device, etc.
  • in some embodiments, the camera restoration request is initiated based on the current user's or target user's touch operation on the camera restoration control; in other embodiments, the camera restoration request is initiated based on the current user's or target user's interactive input operation instructions regarding camera restoration (such as gestures, voice, head movements, etc.). Based on the camera restoration request, the computer device turns off the top camera device that is currently enabled for video interaction and enables the corresponding front camera device, thereby restoring the camera state.
  • the method further includes step S108 (not shown).
  • in step S108, a camera start request regarding the video interaction between the computer device and the target user device is obtained; in step S101, in response to the camera start request, the corresponding top image information is collected through the top camera device, and the top image information is transmitted to the corresponding target user equipment; in step S105, in response to the camera start request, the corresponding front image information is collected through the front camera device and the front image information is transmitted to the target user equipment.
  • the camera start request is used to enable the front camera device and the top camera device at the same time.
  • the camera start request can be included in the video creation request to enable the front camera device and the top camera device at the same time during the video creation process.
  • correspondingly, closing controls for the front camera device and the top camera device are provided for the current user and/or the target user to close the front camera device or the top camera device; when both camera devices are closed at the same time, the video interaction between the computer device and the target user device is turned off.
  • the camera start request may also be a request to call the front camera device or the top camera device during a video interaction in which only one of the two is enabled, thereby achieving the effect of both camera devices being enabled at the same time.
  • the camera start request may be generated based on the current user's or the target user's touch operation on an opening control, or based on the current user's or the target user's interactive input operation instructions regarding camera startup (such as gestures, voice, head movements, etc.); the two camera devices are enabled simultaneously based on the response of the computer device or the target user device to the camera start request.
  • the computer device includes a lighting device; wherein the method further includes step S109 (not shown).
  • in step S109, if an enabling request for the lighting device is obtained, the lighting device is turned on.
  • the computer equipment may include a lighting device for adjusting the brightness of the operating area.
  • the activation request is used to turn on the corresponding lighting device and project a certain intensity of light on the operating area to change the brightness of the environment.
  • the activation request can be generated by the computer device based on the current user's operation, or generated based on the target user's operation on the target user equipment (such as a touch operation on a lighting control) and transmitted to the computer equipment.
  • in some cases, the enable request is included in the corresponding video creation request, which turns on the lighting device while the video interaction is being established; in other cases, the enable request is determined based on a user operation of the target user or the current user during video interaction, or even when there is no video interaction, thereby turning on the lighting device to adjust the ambient brightness.
  • the computer device further includes an ambient light detection device, and the method further includes step S110 (not shown).
  • in step S110, the illumination intensity information of the current environment is obtained through the ambient light detection device, and it is detected whether the illumination intensity information meets a preset lighting threshold; if not, the lighting device is adjusted until the illumination intensity information meets the preset lighting threshold.
  • the lighting device of the computer equipment includes a lighting device with adjustable brightness.
  • for example, the lighting brightness of the lighting device can be adjusted based on the current user's or target user's touch selection operation on a brightness adjustment control, or based on the current user's or target user's interactive input operation instructions regarding brightness adjustment (such as gestures, voice, head movements, etc.).
  • the computer equipment includes an ambient light detection device, which is used to cooperate with the lighting device to realize automatic adjustment of lighting and ensure the controllability and applicability of ambient brightness.
  • the computer equipment measures the light intensity information of the current environment based on the ambient light detection device, and compares the light intensity information with a preset lighting threshold.
  • the preset lighting threshold can be a specific light intensity value or an interval composed of multiple light intensity values, which is not limited here.
  • for example, if the illumination intensity information is the same as the lighting threshold information, or the intensity difference is less than a preset difference threshold, it is determined that the illumination intensity information satisfies the preset lighting threshold; or, if the illumination intensity information is within the lighting threshold interval, it is determined that the illumination intensity information satisfies the preset lighting threshold. If not, the corresponding lighting adjustment information is calculated based on the illumination intensity information and the lighting threshold information, and the lighting device is adjusted based on the lighting adjustment information.
  • the lighting adjustment information includes, for example, the adjustment value by which the current illumination intensity is to be increased or decreased, etc.
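  • a closed-loop version of this adjustment might look like the following sketch, where the threshold interval, step size and the sensor/lamp APIs (read_lux, get_brightness, set_brightness) are all assumptions for illustration:

        def regulate_ambient_light(sensor, lamp, target_lux=(300.0, 500.0), step=10.0):
            # Read the ambient light sensor and nudge the lamp brightness until
            # the reading falls inside the preset lighting threshold interval.
            lo, hi = target_lux
            brightness = lamp.get_brightness()
            for _ in range(50):  # bounded loop to avoid hunting forever
                lux = sensor.read_lux()
                if lo <= lux <= hi:
                    break  # illumination intensity satisfies the preset threshold
                brightness += step if lux < lo else -step
                brightness = max(0.0, min(100.0, brightness))
                lamp.set_brightness(brightness)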
  • the method further includes step S111 (not shown).
  • in step S111, the image brightness information corresponding to the top image information is determined based on the top image information, and it is detected whether the image brightness information meets a preset brightness threshold; if not, the lighting device is adjusted until the image brightness information meets the preset brightness threshold.
  • the computer device can also determine the corresponding image brightness information based on the top image information, for example, calculate the average brightness information of the current image information based on the pixel brightness information of part (for example, sampling some pixels, etc.) or all pixels in the top image information. The image brightness information is compared with a preset brightness threshold.
  • the preset brightness threshold can be a specific image brightness value, or an interval composed of multiple image brightness values, etc., which is not limited here. If the image brightness information is the same as the brightness threshold information or the brightness difference is less than the preset difference threshold, etc., then it is determined that the image brightness information satisfies the preset brightness threshold; or if the image brightness information is within the brightness threshold interval, then It is determined that the image brightness information satisfies a preset brightness threshold, etc. If not, the corresponding lighting adjustment information is calculated based on the image brightness information and the brightness threshold information, and the corresponding lighting device is adjusted based on the lighting adjustment information.
  • the lighting adjustment information includes, for example, the adjustment value by which the current lighting intensity is to be increased or decreased, etc.
  • the computer device can also adjust the brightness of the lighting device based on the image brightness information of a specific area of the top image information.
  • the specific area can be an interaction area determined based on the interactive object (for example, a boundary area, a circumscribed rectangular area, etc.), or an area determined from the top image information based on user operations of the target user or the current user.
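  • the image brightness statistic itself can be computed cheaply from sampled pixels, for example as below; the sampling step and the use of BT.601 luma weights are illustrative assumptions:

        import numpy as np

        def mean_brightness(image_bgr, sample_step=8):
            # Average the luma of a subsample of pixels; pass a cropped region
            # to restrict the measurement to a specific interaction area.
            pixels = image_bgr[::sample_step, ::sample_step].astype(np.float32)
            luma = pixels @ np.array([0.114, 0.587, 0.299], dtype=np.float32)  # B, G, R weights
            return float(luma.mean())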
  • the method further includes step S112 (not shown).
  • in step S112, the interactive position information of the current user with respect to the top image information is obtained, and the corresponding virtual presentation information is determined based on the interactive position information.
  • the interactive position information about the interactive object in the top image information can be obtained based on the user operation of the current user.
  • for example, the corresponding top image information is presented on the display device, and based on the current user's frame-selection, click or touch operations, one or more pixel positions or pixel areas are determined from the top image information and used as the corresponding interactive position information; the interactive position information includes, for example, coordinate position information in the image coordinate system of the top image information. For another example, based on the current user's operation on the operation object (for example, pointing a finger or pen tip at a certain position), the computer device determines the pointed position of the finger or pen through image recognition technology and uses the pointed position as the corresponding interactive position information.
  • in some embodiments, the computer device can directly present corresponding virtual presentation information based on the interactive position information. For example, the computer device matches virtual information corresponding to the interactive position information from a database, or performs target recognition on the interactive object corresponding to the interactive position information and thereby matches the corresponding virtual information in the database; the matched virtual information is determined as the virtual presentation information, and the projection position information of the virtual presentation information is determined from the interactive position information, so that the virtual presentation information is projected onto the spatial location of the interactive object through the projection device.
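  • a toy sketch of this lookup-and-project flow follows; the database contents, object labels and projector API are hypothetical, and annotation_to_projector plus the homography H are reused from the calibration sketch above:

        # Hypothetical table mapping recognized interactive objects to virtual information
        VIRTUAL_INFO_DB = {
            "picture_book_page": {"type": "animation", "asset": "fox.gif"},
            "gear_assembly": {"type": "3d_mark", "asset": "arrow.obj"},
        }

        def present_virtual_info(H, interact_xy, object_label, projector):
            # Match virtual presentation information for the recognized object
            # and project it at the interactive position.
            info = VIRTUAL_INFO_DB.get(object_label)
            if info is None:
                return  # nothing to present for this object
            proj_xy = annotation_to_projector(H, interact_xy)
            projector.render(info, at=proj_xy)  # assumed projector API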
  • the computer device includes an infrared measurement device; obtaining the interactive position information of the current user with respect to the top image information includes: determining the interactive position information with respect to the top image information through the infrared measurement device.
  • the computer equipment also includes an infrared measurement device.
  • the infrared measurement device includes an infrared camera and an infrared emitter.
  • the infrared camera is installed on the top extension rod together with the top camera device, and the infrared emitter is installed on the base of the computer equipment; the infrared emitter forms an invisible light film at a certain distance threshold above the surface of the operating object.
  • when a finger or any opaque object touches the operating object, the light is reflected to the infrared camera, and the position of the finger or opaque object touching the operating object is then obtained through accurate photoelectric position calculation.
  • the infrared measurement device includes an infrared camera and an infrared pen.
  • the infrared camera can determine the position where the infrared pen contacts the operating object; the corresponding interactive position information is determined based on the position where the finger, any opaque object or the infrared pen touches the operating object.
  • the interactive position information is sent to the target user equipment, and the annotation information transmitted by the target user equipment and returned based on the interactive position information is received, thereby determining the image annotation information of the top image information, wherein the image annotation information includes the annotation information and the interactive position information, and the annotation information is determined by the first user operation of the target user.
  • the interactive position information is used to indicate to the target user the area where the interactive object is currently located in the top image information.
  • the target user device receives the top image information transmitted by the computer device along with the interactive position information, and collects the target user's first user operation on the interactive object corresponding to the interactive position information, so that the corresponding annotation information is determined based on the first user operation.
  • the target user device directly returns the annotation information to the computer device, and the computer device implements projection and presentation of the annotation information based on the received annotation information and combined with the previously determined interactive position information.
  • the computer device further includes a distance measurement device; wherein the method further includes step S113 (not shown).
  • in step S113, the distance information between the current user and the computer device is determined through the distance measurement device; if the distance information does not meet a preset distance threshold, the computer device sends a prompt notification.
  • the distance measurement device is disposed on a side of the computer device parallel to the corresponding operating area and facing the current user, and is used to measure real-time distance information between the computer device and the current user, such as a laser rangefinder.
  • the computer device is provided with a corresponding distance threshold interval.
  • when the distance information between the computer device and the current user is within the distance threshold interval, it is determined that the distance information satisfies the preset distance threshold and that the current user's posture meets the requirements. If the distance information between the computer device and the current user is outside the distance threshold interval, it is determined that the distance information does not meet the preset distance threshold, and the computer device issues a corresponding prompt notification, for example reminding the current user through sound, image, vibration, text, etc. that the posture needs to be adjusted until the corresponding distance information meets the preset distance threshold. The check reduces to an interval test, as in the sketch below.
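  • a minimal sketch of the interval test; the bounds and the notification channel are hypothetical:

```python
def notify_user(message: str) -> None:
    # Placeholder for the prompt notification (sound, image, vibration, text).
    print(f"[prompt] {message}")

def posture_ok(distance_m: float,
               lower_m: float = 0.35,
               upper_m: float = 0.60) -> bool:
    """Return True if the measured distance lies inside the preset
    distance threshold interval; otherwise trigger a prompt."""
    if lower_m <= distance_m <= upper_m:
        return True
    notify_user("Please adjust your posture")
    return False
```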
  • Figure 3 shows a projection interaction method according to one aspect of the present application, wherein the method is applicable to the system shown in Figure 1, is applied to the target user equipment 200, and mainly includes steps S201, S202 and S203.
  • in step S201, the top image information transmitted by the corresponding computer device and collected by the corresponding top camera device is received and presented; in step S202, the first user operation of the target user corresponding to the target user equipment is obtained, and annotation information about the top image information is generated based on the first user operation; in step S203, the annotation information is returned to the computer device for the computer device to present the annotation information through a corresponding projection device.
  • the current user is, for example, user A, and the computer device can communicate with the target user device held by the target user (for example, user B), such as by establishing a connection between the computer device and the target user device in a wired or wireless manner.
  • the computer equipment includes a top camera device, which is used to collect image information related to the operating object of the current user.
  • the top camera device collects image information related to the operating object from above the operating object (for example, from directly above or diagonally above, etc.); in some cases, the top camera device is disposed directly above the operation object, and the optical axis of the top camera device can pass through the shape center of the operation object, etc.
  • the computer equipment is usually provided with a forward extension rod, and the top camera device is placed on the extension rod facing directly downward so as to collect the image information below.
  • an operation area corresponding to the operation object is provided between the computer equipment and the user.
  • the operation area can be an extension area below the computer device itself (for example, the top image information can be obtained more accurately through the extension area of the computer device itself), or a blank area set between the computer device and the user can serve as the corresponding operation area (for example, a blank desktop set as the operation area, etc.).
  • the base of the computer equipment remains horizontal to maintain the stability of the computer equipment.
  • the optical axis corresponding to the top camera device is vertically downward, and the surface where the operating area is located remains horizontal, thereby allowing the computer equipment to capture top image information in which the distances from each area of the operation object to the top camera device are similar.
  • the computer device collects top image information about the operating object through the top camera device, and sends the top image information directly or via a network device to the target user device.
  • the computer device may first identify the top image information (for example, identify and track the operation object through corresponding preset template features), thereby ensuring that the corresponding operation object exists in the top image information. If there is no operation object in the current top image information, the computer device can adjust the camera angle of the top camera device and collect images from other areas to ensure that an operation object is present in the top image information, for example, by adjusting the extension angle and height of the extension rod or by directly adjusting the camera pose information of the top camera device, thereby changing the camera angle of the top camera device. A sketch of such a presence check appears after the next item.
  • the computer device displays corresponding prompt information to remind the current user that there is no operation object in the current operation area. If there is an operation object in the current top image information, the top image information is transmitted to the target user device. After receiving the top image information, the target user equipment presents the top image information, for example, displays the top image information through a display screen or displays the top image information through projection through a projector.
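  • one way to picture the template-based presence check (illustrative only; the template file, threshold, and OpenCV-based matching are assumptions, not the patent's recognizer):

```python
import cv2

# Hypothetical preset template of the operation object.
TEMPLATE = cv2.imread("operation_object_template.png", cv2.IMREAD_GRAYSCALE)

def operation_object_present(top_frame_bgr, match_threshold: float = 0.8) -> bool:
    """Check whether the preset operation object appears in the top image."""
    gray = cv2.cvtColor(top_frame_bgr, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, TEMPLATE, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val >= match_threshold   # below threshold -> prompt the user
```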
  • the target user device receives and presents the top image information for the target user to perform operations and interactions based on the top image information. While presenting the top image information, the target user device can also collect the target user's annotation information about the interactive object in the top image information through the input device.
  • the interactive position of the interactive object may be predetermined, or may be determined by the current user.
  • after the top image information is transmitted to the target user device, the interactive position may also be determined based on the target user's operation position (for example, touch position, cursor position, gesture recognition result or voice recognition result, etc.).
  • if the interaction position is predetermined, the target user device directly transmits the annotation information to the computer device; if the interaction position in the top image information is determined by the target user's operation position, the target user device determines the interaction position as the annotation position information, combines it with the annotation information to generate corresponding image annotation information, and returns the image annotation information to the computer device.
  • the method further includes step S204 (not shown).
  • in step S204, corresponding annotation position information is determined based on the first user operation; wherein, in step S203, the annotation information and the annotation position information are returned to the computer device for the computer device to present the annotation information through a corresponding projection device based on the annotation position information.
  • the top image information includes an image coordinate system established based on the top image information.
  • the image coordinate system uses a certain pixel point (for example, the pixel point in the upper left corner of the image, etc.) as the origin, uses the horizontal axis as the X-axis and the vertical axis as the Y-axis, and thereby establishes a corresponding image/pixel coordinate system, etc.
  • the corresponding annotation position information includes the coordinate position information, in the image coordinate system, of the annotation information or of the interactive object of the annotation information.
  • the coordinate position information can be used to indicate the center position of the annotation information or of the interactive object of the annotation information, or the coordinate set of the corresponding area range, etc.
  • the annotation location information may be determined based on user operations of the current user and the target user, or may be preset.
  • the corresponding annotation information is determined by the target user device based on the first user operation of the target user on the top image information; for example, the corresponding annotation information is determined based on mark information added by the target user through mouse input, keyboard input, touch screen input, gesture input, or voice input operations. In some cases, the annotation position information is determined in the image coordinate system of the top image information based on the corresponding mouse click position, keyboard input position, touch screen touch position, gesture recognition result, or voice recognition result; or the interactive object is first determined in the top image information based on those operation positions, and the corresponding image annotation position is then determined based on the interactive object. A coordinate-mapping sketch follows.
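  • a minimal sketch of mapping a touch position on the target device's display back into the top image's pixel coordinate system, assuming the top image fills the view without letterboxing (the scaling model is an assumption):

```python
def display_to_image_coords(touch_x: float, touch_y: float,
                            view_w: int, view_h: int,
                            img_w: int, img_h: int) -> tuple[float, float]:
    """Map a touch on the displayed top image to image/pixel coordinates
    (origin at the upper-left corner, X rightward, Y downward)."""
    return touch_x * img_w / view_w, touch_y * img_h / view_h
```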
  • the annotation information includes but is not limited to tag information such as stickers, text, graphics, videos, graffiti, 2D marks or 3D marks about interactive objects in the top image information.
  • depending on the type of annotation information, there are also differences in how the corresponding image position information is presented.
  • the projection device is placed near the top camera device; for example, the corresponding projector and the top camera device are placed together on the top extension rod.
  • the mapping relationship between the corresponding projection device and the top camera device can be obtained through calculation.
  • based on this mapping relationship, the annotation position information can be converted into projection position information, for example, the projection coordinate information of the annotation information in the coordinate system of the corresponding projection image, where this projection coordinate information is only one example of projection position information and is not limited here.
  • the computer device projects the annotation information onto the corresponding operation area according to the corresponding projection coordinate information, thereby presenting the annotation information in the corresponding area of the operation object. The camera-to-projector conversion can be sketched as follows.
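  • a minimal sketch of the camera-to-projector conversion, assuming four calibration correspondences between top-camera pixels and projector pixels are available (all coordinate values here are hypothetical):

```python
import cv2
import numpy as np

# Calibration correspondences: points seen in the top camera image and the
# projector pixels that land on them (hypothetical values).
camera_pts = np.array([[100, 80], [1180, 90], [1170, 630], [110, 640]], np.float32)
projector_pts = np.array([[0, 0], [1280, 0], [1280, 720], [0, 720]], np.float32)
H_CAM_TO_PROJ, _ = cv2.findHomography(camera_pts, projector_pts)

def annotation_to_projection(x: float, y: float) -> tuple[float, float]:
    """Convert annotation position information (top-image pixels)
    into projection position information (projector pixels)."""
    pt = cv2.perspectiveTransform(
        np.array([[[x, y]]], dtype=np.float32), H_CAM_TO_PROJ)
    return tuple(pt[0, 0])
```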
  • Figure 4 shows a projection interactive computer device 100 according to one aspect of the present application.
  • the device includes a first module 101, a second module 102 and a third module 103.
  • a first module 101, configured to collect the corresponding top image information through the top camera device, and transmit the top image information to the corresponding target user equipment, so that the target user equipment can present the top image information;
  • a second module 102, configured to obtain image annotation information, about the top image information, of the target user corresponding to the target user equipment, wherein the image annotation information includes corresponding annotation information and annotation position information of the annotation information, and the annotation information is determined by the first user operation of the target user;
  • a third module 103 is configured to determine corresponding projection position information based on the annotation position information, and project and present the annotation information based on the projection position information.
  • the computer device further includes a display device, and the device further includes a fourth module (not shown) for receiving the target image information transmitted by the target user device and presenting the target image information through the display device.
  • the computer device further includes a front-facing camera device, wherein the front-facing camera device is used to collect front image information about the current user of the computer device; wherein the device further includes a fifth module (not shown), configured to transmit the front image information to the target user equipment so that the target user equipment can present the front image information.
  • the device further includes a module (not shown) for obtaining a camera switching request regarding the current video interaction between the computer device and the target user device, wherein the image information of the current video interaction includes the front image information; wherein the first module 101 is used to respond to the camera switching request, turn off the front camera device and enable the top camera device, collect the corresponding top image information through the top camera device, and transmit the top image information to the corresponding target user equipment, so that the target user equipment can present the top image information.
  • a module is configured to receive a camera switching request transmitted by the target user equipment regarding the current video interaction between the computer device and the target user equipment, wherein the camera switching request is determined based on the second user operation of the target user.
  • a module is configured to obtain a video establishment request regarding the current video interaction; in response to the video establishment request, establish a connection between the computer device and the target user equipment based on the video establishment request.
  • the corresponding front image information is collected through the front camera device and transmitted to the target user equipment, so that the target user equipment can present the front image information.
  • the device further includes a module (not shown) for obtaining a camera restoration request regarding the current video interaction between the computer device and the target user device; in response to the camera restoration request, the module turns off the top camera device and activates the front camera device, collects corresponding front image information through the front camera device, and transmits the front image information to the corresponding target user equipment.
  • the device further includes a module (not shown) for obtaining a camera start request regarding the video interaction between the computer device and the target user device; wherein the first module 101 is configured to, in response to the camera start request, collect the corresponding top image information through the top camera device and transmit the top image information to the corresponding target user equipment; and the fifth module is configured to, in response to the camera start request, collect corresponding front image information through the front camera device and transmit the front image information to the target user equipment.
  • the computer device includes a lighting device; wherein the device further includes a module (not shown) configured to turn on the lighting device if an enabling request for the lighting device is obtained.
  • the computer device further includes an ambient light detection device, and the device further includes a module (not shown) for obtaining illumination intensity information of the current environment based on the ambient light detection device and detecting whether the illumination intensity information satisfies a preset illumination threshold; if not, the lighting device is adjusted until the illumination intensity information satisfies the preset illumination threshold.
  • the device further includes an eleventh module (not shown) for determining image brightness information corresponding to the top image information based on the top image information, and detecting whether the image brightness information satisfies a preset brightness threshold; if not, the lighting device is adjusted until the image brightness information meets the preset brightness threshold. A feedback-loop sketch follows.
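  • a minimal sketch of such a brightness feedback loop; the target level, step sizes, and lamp driver call are hypothetical:

```python
import cv2
import numpy as np

BRIGHTNESS_TARGET = 120  # assumed preset brightness threshold (0-255 gray level)

def set_lamp_level(level: int) -> None:
    # Placeholder for the lighting device driver (hardware-specific).
    pass

def adjust_lighting(top_frame_bgr: np.ndarray, lamp_level: int) -> int:
    """One step of a feedback loop: nudge the lighting device's level until
    the top image's mean brightness approaches the preset threshold."""
    gray = cv2.cvtColor(top_frame_bgr, cv2.COLOR_BGR2GRAY)
    mean_brightness = float(np.mean(gray))
    if mean_brightness < BRIGHTNESS_TARGET - 10:
        lamp_level = min(lamp_level + 1, 10)   # too dark: brighten
    elif mean_brightness > BRIGHTNESS_TARGET + 10:
        lamp_level = max(lamp_level - 1, 0)    # too bright: dim
    set_lamp_level(lamp_level)
    return lamp_level
```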
  • the device further includes a module (not shown) for obtaining the interactive position information of the current user regarding the top image information, determining the corresponding virtual presentation information based on the interactive position information, and projecting and presenting the virtual presentation information through the projection device.
  • the computer device includes an infrared measurement device; wherein the obtaining of the interactive position information of the current user with respect to the top image information includes: determining, through the infrared measurement device, the interactive position information with respect to the top image information.
  • the second module 102 is configured to send the interactive position information to the target user equipment, receive the annotation information transmitted by the target user equipment and returned based on the interactive position information, and thereby determine the image annotation information of the top image information, wherein the image annotation information includes the annotation information and the interactive position information, and the annotation information is determined by the first user operation of the target user.
  • the computer device further includes a distance measurement device; wherein the device further includes a thirteenth module (not shown) for determining, through the distance measurement device, the distance information between the current user and the computer device; if the distance information does not meet the preset distance threshold, the computer device sends a prompt notification.
  • FIG. 5 shows a target user device 200 for projection interaction according to one aspect of the present application, which mainly includes a module 201, a module 202 and a module 203.
  • the module 201 is used to receive and present the top image information transmitted by the corresponding computer device and collected by the corresponding top camera device;
  • the module 202 is used to obtain the first user operation of the target user corresponding to the target user equipment, and generate annotation information about the top image information based on the first user operation;
  • the module 203 is used to return the annotation information to the computer device for the computer device to present the annotation information through the corresponding projection device.
  • the specific implementations of the module 201, the module 202 and the module 203 are the same as or similar to the embodiments of step S201, step S202 and step S203 shown in FIG. 3, and therefore will not be described again, but are included here by reference.
  • the device further includes a module 204 (not shown), used to determine the corresponding annotation position information based on the first user operation; wherein the module 203 is used to return the annotation information and the annotation position information to the computer device for the computer device to present the annotation information through a corresponding projection device based on the annotation position information.
  • the specific implementation corresponding to the module 204 is the same as or similar to the foregoing embodiment of step S204, and therefore will not be described again, but is included here by reference.
  • the present application also provides a computer-readable storage medium that stores computer code.
  • when the computer code is executed, the method described in any preceding item is performed.
  • This application also provides a computer program product.
  • when the computer program product is executed by a computer device, the method described in any preceding item is performed.
  • This application also provides a computer device, which includes:
  • one or more processors;
  • a memory for storing one or more computer programs;
  • wherein, when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any one of the preceding items.
  • FIG. 6 illustrates an exemplary system that may be used to implement various embodiments described in this application
  • system 300 can serve as any of the above-mentioned devices in each of the described embodiments.
  • system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage device 320) having instructions stored thereon, and one or more processors (e.g., processor(s) 305) coupled to the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this application.
  • system control module 310 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with system control module 310.
  • System control module 310 may include a memory controller module 330 to provide an interface to system memory 315 .
  • Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
  • System memory 315 may be used, for example, to load and store data and/or instructions for system 300 .
  • system memory 315 may include any suitable volatile memory, such as suitable DRAM.
  • system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
  • system control module 310 may include one or more input/output (I/O) controllers to provide interfaces to NVM/storage device 320 and communication interface(s) 325 .
  • NVM/storage device 320 may be used to store data and/or instructions.
  • NVM/storage device 320 may include any suitable non-volatile memory (e.g., flash memory) and/or any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDDs), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
  • NVM/storage device 320 may include storage resources that are physically part of the device on which system 300 is installed, or that may be accessed by the device without necessarily being part of the device. For example, NVM/storage device 320 may be accessed over the network via communication interface(s) 325 .
  • Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device.
  • System 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 305 may be packaged with the logic of one or more controllers of the system control module 310 (eg, memory controller module 330). For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 to form a system-in-package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die as the logic of the one or more controllers of the system control module 310 . For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310 to form a system on a chip (SoC).
  • system 300 may be, but is not limited to, a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or a different architecture. For example, in some embodiments, system 300 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application-specific integrated circuits (ASICs), and speakers.
  • the present application may be implemented in software and/or a combination of software and hardware, for example, using an application specific integrated circuit (ASIC), a general purpose computer or any other similar hardware device.
  • the software program of the present application can be executed by a processor to implement the steps or functions described above.
  • the software program of the present application (including related data structures) may be stored in a computer-readable recording medium, such as a RAM memory, a magnetic or optical drive or a floppy disk and similar devices.
  • some steps or functions of the present application may be implemented using hardware, for example, as a circuit that cooperates with a processor to perform each step or function.
  • part of the present application may be applied as a computer program product, such as computer program instructions which, when executed by a computer, may invoke or provide methods and/or technical solutions according to the present application through the operation of the computer.
  • the form in which computer program instructions exist in a computer-readable medium includes but is not limited to source files, executable files, installation package files, etc.
  • the manner in which computer program instructions are executed by a computer includes, but is not limited to: the computer directly executing the instructions, the computer compiling the instructions and then executing the corresponding compiled program, the computer reading and executing the instructions, or the computer reading and installing the instructions and then executing the corresponding installed program.
  • the computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by the computer.
  • Communication media includes the medium whereby communication signals containing, for example, computer readable instructions, data structures, program modules or other data are transmitted from one system to another system.
  • Communication media may include conducted transmission media, such as cables and wires (e.g., fiber optic, coaxial, etc.), and wireless (unguided transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared media.
  • Computer readable instructions, data structures, program modules, or other data may be embodied, for example, as a modulated data signal in a wireless medium, such as a carrier wave or a similar mechanism such as that embodied as part of spread spectrum technology.
  • modulated data signal refers to a signal in which one or more characteristics are altered or set in a manner that encodes information in the signal. Modulation can be analog, digital or hybrid modulation techniques.
  • computer-readable storage media may include volatile and non-volatile, removable and non-removable storage media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data.
  • removable and non-removable media include, but are not limited to: volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); and other media, now known or developed in the future, that can store computer-readable information/data for use by computer systems.
  • one embodiment according to the present application includes a device, the device including a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to operate based on the aforementioned methods and/or technical solutions according to multiple embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application aims to provide a projection interaction method, device, medium, and program product. The method specifically comprises: collecting corresponding top image information by means of a top camera device, and transmitting the top image information to corresponding target user equipment, so that the target user equipment presents the top image information; acquiring image annotation information, about the top image information, of a target user corresponding to the target user equipment, the image annotation information comprising annotation information and corresponding annotation position information of the annotation information, and the annotation information being determined by a first user operation of the target user; and determining corresponding projection position information on the basis of the annotation position information, and projecting and presenting the annotation information on the basis of the projection position information. By means of the present application, engaging interaction can be provided for the current user, and a more realistic and natural augmented reality interaction can also be ensured for the current user.
PCT/CN2022/095921 2022-03-11 2022-05-30 Projection interaction method, device, medium and program product WO2023168836A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210241557.9 2022-03-11
CN202210241557 2022-03-11

Publications (1)

Publication Number Publication Date
WO2023168836A1 true WO2023168836A1 (fr) 2023-09-14

Family

ID=83514634

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095921 WO2023168836A1 (fr) 2022-03-11 2022-05-30 Projection interaction method, device, medium and program product

Country Status (2)

Country Link
CN (1) CN115185437A (fr)
WO (1) WO2023168836A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577788A (zh) * 2012-07-19 2014-02-12 华为终端有限公司 增强现实的实现方法和装置
CN111752376A (zh) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 一种基于影像获取的标注系统
CN111757074A (zh) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 一种影像共享标注系统
CN113096003A (zh) * 2021-04-02 2021-07-09 北京车和家信息技术有限公司 针对多视频帧的标注方法、装置、设备和存储介质
US20220078384A1 (en) * 2020-09-10 2022-03-10 Seiko Epson Corporation Information generation method, information generation system, and non- transitory computer-readable storage medium storing program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107689082B (zh) * 2016-08-03 2021-03-02 腾讯科技(深圳)有限公司 一种数据投影方法以及装置
CN110138831A (zh) * 2019-03-29 2019-08-16 亮风台(上海)信息科技有限公司 一种进行远程协助的方法与设备
CN111988493B (zh) * 2019-05-21 2021-11-30 北京小米移动软件有限公司 交互处理方法、装置、设备及存储介质
CN112231023A (zh) * 2019-07-15 2021-01-15 北京字节跳动网络技术有限公司 一种信息显示方法、装置、设备及存储介质
CN113741698B (zh) * 2021-09-09 2023-12-15 亮风台(上海)信息科技有限公司 一种确定和呈现目标标记信息的方法与设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577788A (zh) * 2012-07-19 2014-02-12 华为终端有限公司 增强现实的实现方法和装置
CN111752376A (zh) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 一种基于影像获取的标注系统
CN111757074A (zh) * 2019-03-29 2020-10-09 福建天泉教育科技有限公司 一种影像共享标注系统
US20220078384A1 (en) * 2020-09-10 2022-03-10 Seiko Epson Corporation Information generation method, information generation system, and non- transitory computer-readable storage medium storing program
CN113096003A (zh) * 2021-04-02 2021-07-09 北京车和家信息技术有限公司 针对多视频帧的标注方法、装置、设备和存储介质

Also Published As

Publication number Publication date
CN115185437A (zh) 2022-10-14

Similar Documents

Publication Publication Date Title
US11868543B1 (en) Gesture keyboard method and apparatus
US11640235B2 (en) Additional object display method and apparatus, computer device, and storage medium
WO2023035829A1 (fr) Procédé de détermination et de présentation d'informations sur des marques cibles et appareil associé
US8976135B2 (en) Proximity-aware multi-touch tabletop
JP6129863B2 (ja) 3次元タッチタイプ入力システム及び光ナビゲーション方法
JP2014533347A (ja) レーザー深度マップの範囲拡張方法
US9632592B1 (en) Gesture recognition from depth and distortion analysis
US20170068417A1 (en) Information processing apparatus, program, information processing method, and information processing system
US11935294B2 (en) Real time object surface identification for augmented reality environments
WO2021135288A1 (fr) Procédé de commande tactile pour dispositif d'affichage, dispositif terminal et support d'informations
WO2020103657A1 (fr) Procédé et appareil de lecture de fcihier vidéo, et support de stockage
US9547370B2 (en) Systems and methods for enabling fine-grained user interactions for projector-camera or display-camera systems
US11853651B2 (en) Method to determine intended direction of a vocal command and target for vocal interaction
CN109656364B (zh) 一种用于在用户设备上呈现增强现实内容的方法与设备
US9304582B1 (en) Object-based color detection and correction
WO2020192175A1 (fr) Procédé et appareil d'étiquetage de graphe tridimensionnel, dispositif, et support
TWI656359B (zh) 用於混合實境之裝置
KR20170040222A (ko) 반사 기반 제어 활성화 기법
US11057549B2 (en) Techniques for presenting video stream next to camera
  • WO2023168836A1 (fr) Projection interaction method, device, medium and program product
US11769293B2 (en) Camera motion estimation method for augmented reality tracking algorithm and system therefor
CN114513689A (zh) 一种遥控方法、电子设备及系统
TWI394063B (zh) 應用影像辨識之指令輸入系統以及方法
US20160091966A1 (en) Stereoscopic tracking status indicating method and display apparatus
US10289203B1 (en) Detection of an input object on or near a surface

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930465

Country of ref document: EP

Kind code of ref document: A1