CN116382487B - Companion-running wall somatosensory interaction system - Google Patents

Companion-running wall somatosensory interaction system

Info

Publication number
CN116382487B
CN116382487B (application CN202310518655.7A)
Authority
CN
China
Prior art keywords
motion
scene
user
limiting
wall
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310518655.7A
Other languages
Chinese (zh)
Other versions
CN116382487A (en)
Inventor
王海艳 (Wang Haiyan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weiadier Information Technology Co ltd
Original Assignee
Beijing Weiadier Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Weiadier Information Technology Co ltd filed Critical Beijing Weiadier Information Technology Co ltd
Priority to CN202310518655.7A priority Critical patent/CN116382487B/en
Publication of CN116382487A publication Critical patent/CN116382487A/en
Application granted granted Critical
Publication of CN116382487B publication Critical patent/CN116382487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)
  • Position Input By Displaying (AREA)

Abstract

The application relates to a companion-running wall somatosensory interaction system comprising a motion capture device for capturing a user's motion state and outputting it; a processor connected with the motion capture device for receiving and processing the motion state to obtain a motion scene and outputting the motion scene; and a display device connected with the processor for receiving the motion scene and projecting and displaying it. The application has the effect of enhancing the user's experience.

Description

Companion-running wall somatosensory interaction system
Technical Field
The application relates to the field of somatosensory interaction, and in particular to a companion-running wall somatosensory interaction system.
Background
A companion-running device is a device that accompanies a user and interacts with the user.
Existing companion-running equipment provides only light following: as the user runs, the human body is sensed by infrared and a lamp is switched on to follow. An infrared sensor is built in. Because the human body maintains a roughly constant temperature of about 37 °C, it emits infrared radiation with a characteristic wavelength of about 10 µm. The passive infrared probe works by detecting the infrared radiation emitted by the human body: the radiation is concentrated by a Fresnel lens onto the infrared sensor, which usually uses a pyroelectric element. When the element senses a change in infrared radiation temperature, its charge balance is disturbed and charge is released outwards, producing a high-level output that is fed to a single-chip microcontroller; the microcontroller then controls an IO port to switch on an LED spotlight, so that the LED spotlight follows the person as they move.
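The light-following behaviour of this prior-art equipment reduces to a few lines of control logic. The sketch below is only an illustration of the loop described above; the sensor read and spotlight write are simulated stand-ins, not the actual single-chip firmware.

```python
import random
import time

# Minimal sketch of the prior-art light-following loop, with hypothetical
# stand-ins for the microcontroller's IO port. read_pir_output() simulates the
# pyroelectric sensor's high-level output, which appears when the received
# infrared radiation changes (a person moving past the probe).

def read_pir_output() -> bool:
    return random.random() < 0.3          # stand-in for reading the PIR pin

def set_spotlight(on: bool) -> None:
    print("LED spotlight", "ON" if on else "OFF")   # stand-in for the IO port write

def follow_light_loop(hold_seconds: float = 5.0, cycles: int = 100) -> None:
    last_trigger = None
    for _ in range(cycles):
        if read_pir_output():             # infrared change detected: person nearby
            set_spotlight(True)
            last_trigger = time.monotonic()
        elif last_trigger is not None and time.monotonic() - last_trigger > hold_seconds:
            set_spotlight(False)          # no motion for a while: switch off
            last_trigger = None
        time.sleep(0.05)

if __name__ == "__main__":
    follow_light_loop(hold_seconds=1.0, cycles=50)
```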
Such existing companion equipment has no figure-following function and interacts poorly with the user, so the user experience is poor.
Disclosure of Invention
To enhance the user's experience, the application provides a companion-running wall somatosensory interaction system.
A companion-running wall somatosensory interaction system, comprising:
the motion capture device is used for capturing the motion state of a user and outputting the motion state;
the processor is connected with the motion capture device and is used for receiving and processing the motion state to obtain a motion scene and outputting the motion scene;
and the display device is connected with the processor and is used for receiving the motion scene and projecting and displaying the motion scene.
Optionally, the display device includes:
the display screen is used for displaying the motion scene;
the lifting assembly is used for moving the display screen up and down;
the support wall is used for installing the display screen and the lifting assembly.
Optionally, a groove is formed in the end of the supporting wall close to the user, the groove extends in the vertical direction, and the display screen is placed in the groove.
Optionally, the lifting assembly includes:
the fixed block is fixedly arranged on the supporting wall;
the fixing tube is vertically arranged and fixedly connected with the fixing block, a plurality of through holes are formed in the fixing tube, and the through holes are spaced at equal intervals along the length of the fixing tube;
one end of the lifting rod is fixedly connected with the display screen, the other end of the lifting rod is inserted into the fixing tube, a limiting assembly is arranged on the fixing tube, one end of the limiting assembly is fixedly connected with the fixing tube, and the other end of the limiting assembly penetrates through the through hole;
and one end of the extension spring is fixedly connected with the supporting wall, and the other end of the extension spring is hooked and connected with the display screen.
Optionally, a limiting groove is formed in the circumferential wall of the lifting rod, and the limiting assembly includes a limiting spring and a limiting block; the limiting spring is located in the limiting groove and fixedly connected with the bottom of the limiting groove, one end of the limiting block is fixedly connected with the end of the limiting spring facing away from the bottom of the limiting groove, and the other end passes through the through hole.
Optionally, the end of the limiting block facing away from the limiting spring is bevelled, and the bevelled end faces downwards.
Optionally, the processor is configured to:
receiving the motion state of the user output by the motion capture device;
constructing a space coordinate system, and determining the space coordinates of each joint of the user according to the action state;
constructing a virtual character according to the space coordinates;
calling a virtual scene based on a preset scene library;
and fusing the virtual character and the virtual scene to obtain a motion scene.
Optionally, before receiving the motion state of the user output by the motion capture device, the method includes:
setting a preset distance threshold;
if the distance from the user to the equipment for acquiring the action state is within a preset distance threshold, acquiring the action state of the user;
and if the distance from the user to the equipment for acquiring the action state is not within the preset distance threshold, not acquiring the action state of the user.
Optionally, fusing the virtual character and the virtual scene to obtain a motion scene includes:
retrieving the motion state of the virtual character and tracking it to obtain a tracking result;
and placing the tracking result in the virtual scene in real time to obtain a motion scene.
Optionally, tracking the motion state of the virtual character to obtain a tracking result includes:
extracting an image of the virtual character and calculating a human body image value;
calculating the similarity value between the human body image value of the current frame and the human body image value of the previous frame;
and if the similarity value reaches a preset value, the human body image value of the current frame is a tracking result.
In summary, the present application includes at least one of the following beneficial technical effects:
1. By constructing a space coordinate system, the space coordinates of each of the user's joints are determined from the user's motion state, a virtual character is constructed from them, and the virtual character is fused with a virtual scene to obtain a motion scene, which is finally projected and displayed on the display device. The user thus has a figure following along during running, and the virtual scene also lifts the user's mood, so the effect of enhancing the user's experience is achieved.
2. When the display screen is installed or removed, the lifting assembly saves labor.
Drawings
FIG. 1 is a cross-sectional view of the overall structure of an embodiment of the present application.
Fig. 2 is an enlarged view at a in fig. 1.
Fig. 3 is a system schematic diagram of a companion-running wall somatosensory interaction system according to an embodiment of the application.
Fig. 4 is a flow chart of a companion-running wall somatosensory interaction method according to an embodiment of the application.
Reference numerals illustrate: 10. a motion capture device; 20. a processor; 30. a display device; 31. a display screen; 32. a lifting assembly; 321. a fixing block; 322. a fixing tube; 3221. a through hole; 323. a lifting rod; 3231. a limiting groove; 3232. a limiting assembly; 32321. a limiting spring; 32322. a limiting block; 324. a tension spring; 33. a supporting wall; 331. a groove.
Detailed Description
The present application will be described in further detail with reference to the accompanying drawings.
This embodiment is intended only to explain the present application and is not to be construed as limiting it. After reading this specification, those skilled in the art may make modifications to this embodiment as needed without any inventive contribution, and such modifications are protected by patent law within the scope of the claims of the present application.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides a companion-running wall somatosensory interaction system, which can improve the user's experience during running.
Referring to fig. 1, 2 and 3, a companion-running wall somatosensory interaction system includes a motion capture device 10, a processor 20 and a display device 30.
The motion capture device 10 may capture a motion state of a user and send the captured motion state of the user to the processor 20. The motion capture device 10 may employ an infrared sensor, a video camera, or a thermal camera, which are capable of capturing a clear, real-time motion state.
The processor 20 is connected with the motion capture device 10. After receiving the user's motion state, the processor 20 generates a virtual character whose motion state is consistent with that of the user, then randomly retrieves a virtual scene from a preset scene library, fuses the motion state of the virtual character with the virtual scene (that is, fuses the user's motion state with the virtual scene) to obtain a motion scene, and sends the motion scene to the display device 30. The use of the virtual scene can lift the user's mood while running and enhances the user's experience. It should be noted that a plurality of different virtual scenes are stored in the preset scene library, all of them suitable for the user's running exercise. In the embodiment of the application, the processor 20 is an MCU controller.
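As a rough orientation before the step-by-step method below, the following sketch shows one way the processor's data flow could be organised; the scene names, classes and functions are illustrative assumptions for this sketch, not the patent's own API.

```python
import random
from dataclasses import dataclass

# Illustrative data flow for the processor 20: a captured motion state goes in,
# a fused motion scene comes out and is handed to the display device 30.
# All names here are assumptions made for this sketch.

@dataclass
class MotionState:
    joints: dict            # joint name -> (x, y, z) reported by the capture device

@dataclass
class MotionScene:
    character: dict         # virtual character whose motion matches the user's
    background: str         # identifier of the randomly chosen virtual scene

PRESET_SCENE_LIBRARY = ["forest_trail", "city_night", "seaside_sunrise"]

def build_virtual_character(state: MotionState) -> dict:
    # Placeholder for steps S200-S300 below: here the character simply mirrors
    # the user's joint coordinates.
    return dict(state.joints)

def process(state: MotionState) -> MotionScene:
    character = build_virtual_character(state)
    scene = random.choice(PRESET_SCENE_LIBRARY)      # random retrieval (step S400)
    return MotionScene(character=character, background=scene)

if __name__ == "__main__":
    demo = MotionState(joints={"head": (0.0, 1.7, 0.0), "hip": (0.0, 1.0, 0.0)})
    print(process(demo))
```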
The display device 30 is connected to the processor 20 and is configured to receive the motion scene and project and display it. The display device 30 comprises a display screen 31, a lifting assembly 32 and a supporting wall 33. The display screen 31 is used for displaying the motion scene; it can be an LED electronic combination screen formed by splicing a plurality of LED screens in sequence. The lifting assembly 32 is used for moving the display screen 31 up and down. The supporting wall 33 is used for mounting the display screen 31 and the lifting assembly 32: a groove 331 is formed in the end of the supporting wall 33 close to the user, the groove 331 extends in the vertical direction, and the LED electronic combination screen can be placed in the groove 331 so as to be limited and fixed. The lifting assembly 32 is installed above the LED electronic combination screen and is fixedly connected with the top end of the LED electronic combination screen and with the supporting wall 33, respectively.
A plurality of lifting assemblies 32 are provided and are mounted on the wall at equal intervals in the horizontal direction. Each lifting assembly 32 includes a fixing block 321, a fixing tube 322, a lifting rod 323 and a tension spring 324. The fixing block 321 is fixedly mounted in the groove 331 of the wall. The fixing tube 322 is mounted on the fixing block 321, is arranged vertically and is hollow inside; a plurality of through holes 3221 are formed in the fixing tube 322, evenly distributed on two opposite sides of its outer wall and spaced at equal intervals along its length. One end of the lifting rod 323 is fixedly connected with the top end of the LED electronic combination screen and the other end is inserted into the fixing tube 322, with the outer wall of the lifting rod 323 abutting against the inner wall of the fixing tube 322. A limiting groove 3231 corresponding to the through hole 3221 is formed in the lifting rod 323, and a limiting assembly 3232 is further arranged on the lifting rod 323 to connect the fixing tube 322 and the lifting rod 323. The limiting assembly 3232 comprises a limiting block 32322 and a limiting spring 32321: the limiting spring 32321 is placed in the limiting groove 3231, one end of it is fixedly connected with the bottom of the limiting groove 3231 and the other end is fixedly connected with the limiting block 32322; the limiting block 32322 can slide in the limiting groove 3231, and the end of the limiting block 32322 facing away from the limiting spring 32321 is bevelled and faces downwards. One end of the tension spring 324 is fixedly connected with the wall, and the other end is hooked onto the top end of the LED electronic combination screen.
When the LED electronic combination screen is installed, workers take the local weather conditions and the audience into account and then set the height of the LED electronic combination screen accordingly. During installation there is no need to climb up: the topmost end of the LED electronic combination screen is hooked onto the tension spring 324 while the lifting rod 323 is aligned with the fixing tube 322; a worker then lifts the LED electronic combination screen upwards so that the lifting rod 323 is inserted into the fixing tube 322, and once it reaches the preset position the limiting block 32322 slides out of the limiting groove 3231 and passes through the through hole 3221, connecting the fixing tube 322 with the lifting rod 323. When the LED electronic combination screen needs to be moved downwards, a worker presses the limiting block 32322 so that it slides back into the limiting groove 3231; the tension spring 324 then applies an upward force to the LED electronic combination screen, so that the screen moves downwards slowly and its descent is cushioned.
Referring to fig. 3 and 4, a companion-running wall somatosensory interaction method, applied to the processor 20, includes:
step S100: the motion state of the user output by the motion capture device is received.
Specifically, if the motion states of all nearby users were captured, the picture shown by the display device 30 would become cluttered. To keep the display clean and reasonable during operation, it is first determined which users are the running users: a preset distance threshold is set and the distance from each user to the motion capture device 10 is calculated; if the distance from the user to the motion capture device 10 is within the preset distance threshold, the user's motion state is acquired, and otherwise it is not acquired. In the embodiment of the application, the preset distance threshold is set according to actual conditions.
The motion capture device 10 captures the user's motion state and sends it to the processor 20, so that the processor 20 obtains the motion state and can process it further.
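To make the gating in step S100 concrete, here is a minimal sketch of the range check, under the assumption that the capture device can report each candidate's position relative to its own origin; the threshold value is purely illustrative.

```python
import math

# Sketch of the distance gate in step S100. The threshold is an assumption;
# the patent only states that it is set according to actual conditions.

PRESET_DISTANCE_THRESHOLD_M = 3.0

def within_capture_range(position: tuple) -> bool:
    return math.dist((0.0, 0.0, 0.0), position) <= PRESET_DISTANCE_THRESHOLD_M

def select_running_users(candidates: dict) -> dict:
    """Keep only the users close enough to the motion capture device 10."""
    return {uid: pos for uid, pos in candidates.items() if within_capture_range(pos)}

if __name__ == "__main__":
    users = {"runner": (1.0, 0.0, 2.0), "bystander": (8.0, 0.0, 5.0)}
    print(select_running_users(users))   # only "runner" is captured
```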
Step S200: a space coordinate system is constructed, and the space coordinates of each joint of the user are determined according to the motion state.
Specifically, a three-dimensional space coordinate system is established, and the user's motion state is mapped into it to obtain the space coordinates of each joint of the user.
Step S300: the virtual character is constructed according to the space coordinates.
Specifically, with the space coordinates of each joint in the user's motion state known, the coordinates are scaled up by a preset proportion until connecting them yields the virtual character.
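For illustration only, steps S200 and S300 could look like the sketch below, assuming per-joint positions are already available from the capture device; the joint names, skeleton edges and scale factor are assumptions made for this example.

```python
import numpy as np

# Illustrative sketch of steps S200-S300: express each captured joint in the
# constructed space coordinate system, scale the coordinates by a preset
# proportion, and connect them into a figure.

PRESET_SCALE = 1.2

SKELETON_EDGES = [("head", "neck"), ("neck", "hip"),
                  ("neck", "l_hand"), ("neck", "r_hand"),
                  ("hip", "l_foot"), ("hip", "r_foot")]

def build_virtual_character(joints_camera: dict, origin: np.ndarray) -> dict:
    # Step S200: map the captured positions into the space coordinate system.
    joints_world = {name: np.asarray(p) - origin for name, p in joints_camera.items()}
    # Step S300: enlarge by the preset proportion and connect the joints.
    joints_scaled = {name: p * PRESET_SCALE for name, p in joints_world.items()}
    lines = [(joints_scaled[a], joints_scaled[b])
             for a, b in SKELETON_EDGES if a in joints_scaled and b in joints_scaled]
    return {"joints": joints_scaled, "lines": lines}

if __name__ == "__main__":
    captured = {"head": (0.0, 1.7, 2.0), "neck": (0.0, 1.5, 2.0), "hip": (0.0, 1.0, 2.0)}
    character = build_virtual_character(captured, origin=np.zeros(3))
    print(len(character["lines"]), "skeleton segments")
```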
Step S400: a virtual scene is retrieved based on the preset scene library.
Specifically, the virtual character is constructed as described above, and its motion state is consistent with the user's motion state. To improve the user's experience during running and to lift the user's mood, a virtual scene is retrieved from the preset scene library; the library stores a plurality of virtual scenes of different styles, which can meet the user's needs while running.
Step S500: the virtual character and the virtual scene are fused to obtain a motion scene.
Specifically, the motion state of the virtual character is retrieved and tracked to obtain a tracking result. The tracking result is then placed in the virtual scene in real time to obtain a motion scene, that is, a scene of the virtual character running in the virtual scene.
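One simple way to picture the real-time placement is per-frame compositing of the tracked character onto the chosen scene; the sketch below is an assumption for illustration, not the patent's implementation.

```python
import numpy as np

# Sketch of the placement in step S500: overlay the tracked virtual character
# onto the current frame of the virtual scene. Frames are HxWx3 uint8 arrays
# and the mask marks character pixels.

def compose_motion_scene(scene_frame: np.ndarray,
                         character_frame: np.ndarray,
                         character_mask: np.ndarray) -> np.ndarray:
    """Return the scene frame with character pixels copied on top of it."""
    out = scene_frame.copy()
    out[character_mask > 0] = character_frame[character_mask > 0]
    return out

if __name__ == "__main__":
    scene = np.zeros((4, 4, 3), dtype=np.uint8)
    character = np.full((4, 4, 3), 255, dtype=np.uint8)
    mask = np.zeros((4, 4), dtype=np.uint8)
    mask[1:3, 1:3] = 1
    print(compose_motion_scene(scene, character, mask)[1, 1])   # [255 255 255]
```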
During tracking of the virtual character's motion state, an image of the virtual character is first extracted and a human body image value is calculated; the similarity between the human body image value of the current frame and that of the previous frame is then computed. If the similarity reaches a preset value, the human body image value of the current frame is taken as the tracking result; if it does not, the virtual character is reconstructed for the user.
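For concreteness, the similarity test can be pictured as below, where the "human body image value" is taken to be a normalised grey-level histogram of the extracted character image and consecutive frames are compared with cosine similarity; both the histogram representation and the preset value are assumptions made for this sketch.

```python
import numpy as np

# Illustrative tracking check: compare the human body image value of the
# current frame with that of the previous frame and accept the current value
# as the tracking result when the similarity reaches the preset value.

PRESET_SIMILARITY = 0.9

def body_image_value(character_image: np.ndarray) -> np.ndarray:
    hist, _ = np.histogram(character_image, bins=32, range=(0, 255))
    return hist / max(hist.sum(), 1)

def track_step(prev_value: np.ndarray, current_image: np.ndarray):
    """Return (tracking_result, ok); ok=False means the character must be rebuilt."""
    current_value = body_image_value(current_image)
    denom = np.linalg.norm(prev_value) * np.linalg.norm(current_value)
    similarity = float(prev_value @ current_value / denom) if denom else 0.0
    if similarity >= PRESET_SIMILARITY:
        return current_value, True
    return None, False

if __name__ == "__main__":
    frame0 = np.random.default_rng(0).integers(0, 256, size=(120, 60))
    frame1 = np.clip(frame0 + 3, 0, 255)          # nearly identical next frame
    result, ok = track_step(body_image_value(frame0), frame1)
    print("tracked:", ok)
```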
The above description covers only preferred embodiments of the application and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the application is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of those features or their equivalents without departing from the spirit of the application, for example solutions in which the above features are replaced with technical features of similar function disclosed in (but not limited to) the present application.

Claims (7)

1. A companion-running wall somatosensory interaction system, characterized by comprising:
a motion capture device (10), wherein the motion capture device (10) is used for capturing the motion state of a user and outputting the motion state;
the processor (20) is connected with the motion capture device (10) and is used for receiving and processing the motion state to obtain a motion scene and outputting the motion scene;
the display device (30) is connected with the processor (20) and used for receiving the motion scene and projecting and displaying the motion scene;
the processor (20) is configured to:
receiving the motion state of the user output by the motion capture device (10);
constructing a space coordinate system, and determining the space coordinates of each joint of the user according to the action state;
constructing a virtual character according to the space coordinates;
calling a virtual scene based on a preset scene library;
fusing the virtual character and the virtual scene to obtain a motion scene;
the fusing of the virtual character and the virtual scene to obtain a motion scene comprises the following steps:
retrieving the motion state of the virtual character and tracking it to obtain a tracking result;
placing the tracking result in a virtual scene in real time to obtain a motion scene;
the step of tracking the action state of the virtual character to obtain a tracking result comprises the following steps:
extracting an image of the virtual character and calculating a human body image value;
calculating the similarity value between the human body image value of the current frame and the human body image value of the previous frame;
and if the similarity value reaches a preset value, the human body image value of the current frame is a tracking result.
2. The companion-running wall somatosensory interaction system according to claim 1, wherein the display device (30) includes:
a display screen (31) for displaying the motion scene;
the lifting assembly (32) is used for moving the display screen (31) up and down;
and the supporting wall (33) is used for installing the display screen (31) and the lifting assembly (32).
3. The companion-running wall somatosensory interaction system according to claim 2, wherein a groove (331) is formed in the end of the supporting wall (33) close to the user, the groove (331) extends in the vertical direction, and the display screen (31) is placed in the groove (331).
4. The companion-running wall somatosensory interaction system according to claim 3, wherein the lifting assembly (32) comprises:
the fixed block (321), the fixed block (321) is fixedly installed on the supporting wall (33);
the fixing pipe (322) is vertically arranged and fixedly connected with the fixing block (321), a plurality of through holes (3221) are formed in the fixing pipe (322), and the through holes (3221) are spaced at equal intervals along the length of the fixing pipe (322);
the lifting rod (323), one end of the lifting rod (323) is fixedly connected with the display screen (31), the other end of the lifting rod is inserted into the fixed pipe (322), a limiting component (3232) is arranged on the fixed pipe (322), one end of the limiting component (3232) is fixedly connected with the fixed pipe (322), and the other end of the limiting component penetrates through the through hole (3221);
and one end of the extension spring (324) is fixedly connected with the supporting wall (33), and the other end of the extension spring (324) is hooked and connected with the display screen (31).
5. The companion-running wall somatosensory interaction system according to claim 4, wherein a limiting groove (3231) is formed in the circumferential wall of the lifting rod (323), the limiting assembly (3232) comprises a limiting spring (32321) and a limiting block (32322), the limiting spring (32321) is located in the limiting groove (3231) and fixedly connected with the bottom of the limiting groove (3231), one end of the limiting block (32322) is fixedly connected with the end of the limiting spring (32321) facing away from the bottom of the limiting groove (3231), and the other end passes through the through hole (3221).
6. The companion-running wall somatosensory interaction system according to claim 5, wherein the end of the limiting block (32322) facing away from the limiting spring (32321) is bevelled, and the bevelled end faces downwards.
7. The companion-running wall somatosensory interaction system according to claim 1, wherein, before receiving the motion state of the user output by the motion capture device (10), the processor (20) is configured to:
setting a preset distance threshold;
if the distance from the user to the equipment for acquiring the action state is within a preset distance threshold, acquiring the action state of the user;
and if the distance from the user to the equipment for acquiring the action state is not within the preset distance threshold, not acquiring the action state of the user.
CN202310518655.7A 2023-05-09 2023-05-09 Companion-running wall somatosensory interaction system Active CN116382487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310518655.7A CN116382487B (en) 2023-05-09 2023-05-09 Companion-running wall somatosensory interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310518655.7A CN116382487B (en) 2023-05-09 2023-05-09 Companion-running wall somatosensory interaction system

Publications (2)

Publication Number Publication Date
CN116382487A CN116382487A (en) 2023-07-04
CN116382487B 2023-12-12

Family

ID=86967698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310518655.7A Active CN116382487B (en) 2023-05-09 2023-05-09 Companion-running wall somatosensory interaction system

Country Status (1)

Country Link
CN (1) CN116382487B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105183147A (en) * 2015-08-03 2015-12-23 众景视界(北京)科技有限公司 Head-mounted smart device and method thereof for modeling three-dimensional virtual limb
CN108845670A (en) * 2018-06-27 2018-11-20 苏州馨镜家园软件科技有限公司 A kind of online virtual fitness entertainment systems and method based on somatosensory device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3535645B1 (en) * 2016-11-03 2023-07-26 Zimmer US, Inc. Augmented reality therapeutic movement display and gesture analyzer
US11395940B2 (en) * 2020-10-07 2022-07-26 Christopher Lee Lianides System and method for providing guided augmented reality physical therapy in a telemedicine platform

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105183147A (en) * 2015-08-03 2015-12-23 众景视界(北京)科技有限公司 Head-mounted smart device and method thereof for modeling three-dimensional virtual limb
CN108845670A (en) * 2018-06-27 2018-11-20 苏州馨镜家园软件科技有限公司 A kind of online virtual fitness entertainment systems and method based on somatosensory device

Also Published As

Publication number Publication date
CN116382487A (en) 2023-07-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant