CN113284404B - Electronic sand table display method and device based on user actions - Google Patents


Publication number
CN113284404B
CN113284404B (application CN202110455460.3A)
Authority
CN
China
Prior art keywords
user, action, amplitude, sand table
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110455460.3A
Other languages
Chinese (zh)
Other versions
CN113284404A (en
Inventor
宫闻丰
冯韶云
Current Assignee (listed assignees may be inaccurate)
Guangzhou Jiuwu Digital Technology Co ltd
Original Assignee
Guangzhou Jiuwu Digital Technology Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Guangzhou Jiuwu Digital Technology Co ltd filed Critical Guangzhou Jiuwu Digital Technology Co ltd
Priority to CN202110455460.3A
Publication of CN113284404A
Application granted
Publication of CN113284404B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 25/00: Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F: DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F 19/00: Advertising or display means not otherwise provided for
    • G09F 19/12: Advertising or display means not otherwise provided for using special optical effects
    • G09F 19/18: Advertising or display means not otherwise provided for using special optical effects involving the use of optical projection means, e.g. projection of images on clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Accounting & Taxation (AREA)
  • Marketing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an electronic sand table display method and device based on user actions. The method is applied to an electronic sand table equipped with three-dimensional projection equipment and comprises the following steps: when a model is being displayed and a triggering operation of a user is received, acquiring an image of the area that the display area of the electronic sand table faces; cutting N user images out of the area image; acquiring N user actions from the N user images and determining N target intents based on the N user actions; and sorting the N target intents, and controlling the three-dimensional projection equipment to adjust the display angle and display content of the displayed model according to the sorting result. The invention achieves communication and interaction between the displayed model and audience users, so as to meet users' viewing requirements; meanwhile, because its adjustment can include both content adjustment and angle adjustment, it enlarges the display angle and display range, increases users' desire to watch, and thereby improves the display effect.

Description

Electronic sand table display method and device based on user actions
Technical Field
The invention relates to the technical field of intelligent display, in particular to an electronic sand table display method and device based on user actions.
Background
An electronic sand table is an electronic model made and displayed to scale from a topographic map, aerial photographs, or the actual terrain; in recent years it has been applied in various fields such as city planning, building display, and scenic-spot introduction.
With the development of science and technology, more and more intelligent sand table display devices are applied to different places, such as exhibition halls. Different scene models or animation models preset by a user can be input into the intelligent sand table display device, and then the intelligent sand table display device displays the scene models or the animation models according to a preset sequence or in a fixed mode.
However, the conventional display mode has the following technical problems: because the display order and content of the different models are preset by the user, the displayed model lacks communication and interaction with the audience during display. Once the display runs for a long time, the audience's desire and mood for watching tend to fade and the display effect degrades. Moreover, because the display angle is fixed, the audience must gather at one specific spot to watch, which also restricts the display and viewing area.
Disclosure of Invention
The invention provides an electronic sand table display method and device based on user actions.
The first aspect of the embodiment of the invention provides an electronic sand table display method based on user actions, which is applied to an electronic sand table with three-dimensional projection equipment, and comprises the following steps:
when the model is displayed and the triggering operation of a user is received, acquiring an area image towards which a display area of the electronic sand table faces;
intercepting N user images from the area image, wherein N is a positive integer greater than or equal to 1;
acquiring N user actions from the N user images, and determining N target intents based on the N user actions;
and sequencing the N target intents, and controlling the three-dimensional projection equipment to adjust the display angle and the display content of the display model according to the sequencing result.
In a possible implementation manner of the first aspect, the acquiring N user actions from the N user images includes:
respectively acquiring the user profile characteristics of each user image to obtain N user profile characteristics;
collecting a plurality of joint point coordinates corresponding to each user contour feature;
connecting a plurality of joint point coordinates corresponding to each user contour feature into corresponding user body states to obtain N user body states;
and determining corresponding user actions based on each user posture to obtain N user actions.
In one possible implementation manner of the first aspect, the determining N target intents based on the N user actions includes:
respectively calculating the action difference between each user action and a preset static action to obtain a difference action corresponding to each user action;
calculating action amplitude based on each difference action to obtain N action amplitude values;
and determining the target intention of the user according to each action amplitude value to obtain N target intentions.
In one possible implementation of the first aspect, the target intent includes an angle adjustment, a content adjustment, and a hold;
the determining the target intention of the user according to each action amplitude value comprises the following steps:
when the action amplitude is larger than a preset first amplitude, determining that the target intention of the user is content adjustment;
when the action amplitude is smaller than a preset first amplitude and larger than a preset second amplitude, determining that the target intention of the user is angle adjustment, wherein the preset first amplitude is larger than the preset second amplitude;
and when the action amplitude is smaller than a preset second amplitude, determining that the target intention of the user is kept unchanged.
In a possible implementation manner of the first aspect, the sorting the N target intents includes:
respectively determining the area ratio of each user image to the area image to obtain N area ratios;
and sorting the N target intents in descending order of the N area ratios.
A second aspect of an embodiment of the present invention provides an electronic sand table display apparatus based on a user action, where the apparatus is applied to an electronic sand table with a three-dimensional projection device, and the apparatus includes:
the acquisition module is used for acquiring an area image towards which a display area of the electronic sand table faces when the model is displayed and the triggering operation of a user is received;
the intercepting module is used for intercepting N user images from the area image, wherein N is a positive integer greater than or equal to 1;
a determination module that collects N user actions from the N user images and determines N target intents based on the N user actions;
and the adjusting module is used for sequencing the N target intents and controlling the three-dimensional projection equipment to adjust the display angle and the display content of the display model according to the sequencing result.
In a possible implementation manner of the second aspect, the determining module is further configured to:
respectively acquiring the user profile characteristics of each user image to obtain N user profile characteristics;
collecting a plurality of joint point coordinates corresponding to each user contour feature;
connecting a plurality of joint point coordinates corresponding to each user contour feature into corresponding user body states to obtain N user body states;
and determining corresponding user actions based on each user posture to obtain N user actions.
In a possible implementation manner of the second aspect, the determining module is further configured to:
respectively calculating the action difference between each user action and a preset static action to obtain a difference action corresponding to each user action;
calculating action amplitude based on each difference action to obtain N action amplitude values;
and determining the target intention of the user according to each action amplitude value to obtain N target intentions.
Compared with the prior art, the electronic sand table display method and device based on user actions provided by the embodiments of the invention have the following beneficial effects: the method acquires images of users and determines their postures, determines user intentions based on those postures, selects the user intention with the highest priority, and finally adjusts the displayed model according to that intention, achieving communication and interaction between the displayed model and audience users so as to meet users' viewing requirements.
Drawings
Fig. 1 is a schematic flowchart of an electronic sand table display method based on user actions according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic sand table display device based on user actions according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The traditional electronic sand table display mode has the following technical problems: because the display order and content of the different models are preset by the user, the displayed model lacks communication and interaction with the audience during display. Once the display runs for a long time, the audience's desire and mood for watching tend to fade and the display effect degrades; meanwhile, because the display angle is fixed, the audience must gather at one specific spot to watch, which also restricts the display and viewing area.
In order to solve the above problem, an electronic sand table display method based on user actions according to the embodiments of the present application will be described and explained in detail through the following specific embodiments.
Referring to fig. 1, a flowchart of an electronic sand table display method based on user actions according to an embodiment of the present invention is shown. The method is applied to an electronic sand table equipped with three-dimensional projection equipment and a camera. The electronic sand table may be mounted on a wall or placed on a table top or platform.
In this embodiment, the electronic sand table may show an electronic model or animation or scene picture preset by a user.
By way of example, the electronic sand table display method based on user actions may include:
and S11, when the model is displayed and the triggering operation of the user is received, acquiring an area image towards which the display area of the electronic sand table faces.
The electronic sand table can be provided with a touch screen and can be used for controlling the electronic sand table to perform corresponding operation; optionally, the electronic sand table may also be connected to a user terminal, and may perform information interaction with the user terminal.
In a specific implementation, the triggering operation may be the user touching the electronic sand table, or the user sending control instructions or connection information to the electronic sand table through an intelligent terminal. The area image is specifically an image of the area that the display screen of the electronic sand table faces: for example, if the electronic sand table is hung on a wall, the facing area is the area in front of the display screen; if the electronic sand table is placed on a platform or the ground, the facing area is the area above the display screen.
While the electronic sand table is displaying the model, if the triggering operation of the user is received, the electronic sand table can control the camera to acquire the area image in real time.
In an alternative embodiment, the display area may be adjusted according to the actual needs of the user, and the shooting angle may be increased or decreased based on the orientation area of the display model to obtain the corresponding area image.
And S12, intercepting N user images from the area image, wherein N is a positive integer greater than or equal to 1.
In this embodiment, faces may be recognized in the area image, and the N user images may be respectively cut out based on the face recognition results.
Specifically, each user image includes at least one user face image and a human body image corresponding to the user, and may include limb movements of the user.
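As an illustrative sketch of this cropping step (the expansion heuristic and every numeric factor below are assumptions, not taken from the patent), a face bounding box returned by a face detector can be expanded into a user-image crop that also covers the body:

```python
def user_crop_box(face_box, image_h, image_w, body_factor=7.0):
    """Expand a detected face bounding box (x, y, w, h) into a crop
    rectangle (x0, y0, x1, y1) that also covers the user's body,
    clipped to the bounds of the area image.

    The width factor of 3 and the hypothetical body_factor of roughly
    7 head-heights are illustrative heuristics only.
    """
    x, y, w, h = face_box
    cx = x + w / 2                       # horizontal centre of the face
    crop_w = 3 * w                       # widen to cover shoulders and arms
    crop_h = body_factor * h             # extend downward toward the feet
    x0 = max(0, int(cx - crop_w / 2))
    y0 = max(0, y)
    x1 = min(image_w, int(cx + crop_w / 2))
    y1 = min(image_h, int(y + crop_h))
    return (x0, y0, x1, y1)
```

Any real implementation would first obtain `face_box` from a face detector (e.g. a cascade classifier or a neural detector); that part is omitted here.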
S13, collecting N user actions from the N user images, and determining N target intentions based on the N user actions.
After the N user images are obtained, motion recognition may be performed on each user image to obtain the user action corresponding to that image, yielding N user actions; the user intention is then determined from each user action, yielding N user intentions, so that whether the displayed model needs to be adjusted can be decided based on those intentions.
Specifically, since user actions are of many kinds and there may be multiple user images, in order to determine user intention accurately and efficiently, step S13 may include the following sub-steps, as an example:
and a substep S131, respectively obtaining the user profile characteristics of each user image to obtain N user profile characteristics.
In practical operation, the human body contour of each user image may be scanned, and the user contour features may be determined based on the human body contour to obtain N user contour features, where the user contour features may specifically be the contour images of the user.
And a substep S132 of collecting a plurality of joint point coordinates corresponding to each user profile feature.
And acquiring a plurality of joint point coordinates from each user contour feature, wherein each joint point coordinate corresponds to a point coordinate corresponding to each joint of the user in the user contour image.
And a substep S133 of connecting a plurality of joint point coordinates corresponding to each user profile feature into corresponding user postures to obtain N user postures.
And connecting a plurality of joint point coordinates corresponding to each user contour feature to form a user posture corresponding to the user image.
And a substep S134 of determining corresponding user actions based on each user posture to obtain N user actions.
In a specific implementation, each user posture can be input into a preset posture matching model, so as to determine a user action corresponding to each user posture.
For example, if the user is standing with one hand extended in some direction, the user action may be determined to be a swipe or a point; for another example, if the user is in a bent, half-seated posture, the user action may be determined to be a squat.
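A minimal sketch of substeps S131 to S134, assuming each user posture is represented as a dict of named joint coordinates in image space (y grows downward); the joint names and matching rules are illustrative assumptions, since the patent does not specify a concrete posture-matching model:

```python
def classify_action(joints):
    """Map a connected posture (named joint coordinates) to a coarse
    action label. Coordinates are (x, y) with y increasing downward,
    as in image space."""
    wrist_y = joints["right_wrist"][1]
    shoulder_y = joints["right_shoulder"][1]
    hip_y = joints["hip"][1]
    knee_y = joints["knee"][1]
    if wrist_y < shoulder_y:
        return "point"                   # hand raised above the shoulder
    if (knee_y - hip_y) < 0.25 * (knee_y - shoulder_y):
        return "squat"                   # hips dropped close to knee height
    return "stand"
```

A production system would instead feed the full joint set into the preset posture-matching model mentioned above; this rule-based version only illustrates the shape of the mapping.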
After each user action is obtained, the user's intention needs to be determined, and then corresponding adjustment is performed based on the user intention. In order to accurately determine the user intention, step S13 may further include the following sub-steps, as an example:
and a substep S135, calculating motion differences between each of the user motions and a preset static motion, respectively, to obtain a difference motion corresponding to each of the user motions.
In this embodiment, a plurality of static actions may be preset by the user; the corresponding static action is then looked up based on the user action, and the action difference between the current user action and the corresponding static action is computed, where the action difference is the part that differs between the two actions.
Specifically, the static action and the dynamic action may be superimposed; the overlapping portion of the two actions is then removed and the non-overlapping portion retained, yielding the differential action of the two actions.
And a substep S136 of calculating motion amplitude based on each difference motion to obtain N motion amplitude values.
In determining the differential motion, a motion magnitude for each differential motion may be calculated.
Specifically, the coordinate values of the static motion and the coordinate values of the dynamic motion in the differential motion may be obtained respectively, and then the difference between the two coordinate values is calculated to obtain the motion amplitude. After calculating the motion amplitude of each differential motion, N motion amplitudes can be obtained.
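Substeps S135 and S136 can be sketched as follows, again assuming dict-of-joints postures; summing per-joint Euclidean distances is one reasonable reading of "the difference of the two coordinate values", not the patent's mandated formula:

```python
import math

def action_amplitude(static_pose, current_pose):
    """Amplitude of the differential action: total displacement of each
    joint between the preset static posture and the observed posture.
    Joints that do not move contribute zero, so only the non-overlapping
    (differential) part of the action is measured."""
    return sum(math.dist(static_pose[j], current_pose[j])
               for j in static_pose)
```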
And a substep S137, determining the target intention of the user according to each action amplitude value, and obtaining N target intentions.
After obtaining the action magnitude, a target intent of the user may be determined based on the action magnitude.
In an alternative embodiment, the target intent includes an angle adjustment, a content adjustment, and a hold. Specifically, the sub-step S137 may include the following sub-steps:
and step S1371, when the action amplitude is larger than a preset first amplitude, determining that the target intention of the user is content adjustment.
The range of the action amplitude can be judged: when the action amplitude is larger than the preset first amplitude, it is determined that the user's action is large and the user wants the displayed content adjusted, so the target intention of the user is determined to be content adjustment.
The content adjustment may specifically be switching the displayed model or displayed content, for example switching from the second display item to the first or third display item.
And a substep S1372, determining that the target intention of the user is angle adjustment when the action amplitude is smaller than a preset first amplitude and larger than a preset second amplitude, wherein the preset first amplitude is larger than the preset second amplitude.
When the action amplitude is smaller than the preset first amplitude and larger than the preset second amplitude, it is determined that the action of the user is small, the user needs to adjust the display angle, and then it is determined that the target intention of the user is angle adjustment.
Specifically, the angle adjustment may be an adjustment of the display angle of the model currently shown on the electronic sand table. For example, when the electronic sand table hangs on a wall, the model displayed on its front can be adjusted left and right, or up and down; when the electronic sand table is placed on a platform, the upward-facing model or animation can be rotated toward the left or right.
And S1373, when the action amplitude is smaller than a preset second amplitude, determining that the target intention of the user is kept unchanged.
When the action amplitude is smaller than the preset second amplitude, the action change of the user is determined to be weak and can be ignored, and the target intention of the user can be determined to be kept unchanged.
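The three-way decision of substeps S1371 to S1373 reduces to two threshold comparisons; the numeric defaults below are placeholders, since the patent leaves both amplitudes to be preset:

```python
def target_intent(amplitude, first_amplitude=0.5, second_amplitude=0.1):
    """Map an action amplitude to a target intent. Requires
    first_amplitude > second_amplitude, as stated in the method."""
    if amplitude > first_amplitude:
        return "content_adjustment"      # large action
    if amplitude > second_amplitude:
        return "angle_adjustment"        # small but deliberate action
    return "hold"                        # negligible action change
```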
And S14, sequencing the N target intents, and controlling the three-dimensional projection equipment to adjust the display angle and the display content of the display model according to the sequencing result.
After the target intention corresponding to each user action is obtained, since there may be one or more target intentions, in order to meet every user's requirements the target intents can be ranked and the display then adjusted according to the ranking result.
In actual operation, if every target intention were acted upon, the display sequence would become disordered; therefore the display can be adjusted according to the highest-priority target intention in the ranking result.
In order to fit the actual situation of the users and prioritize accurately, step S14 may include the following sub-steps, as an example:
and a substep S141 of determining the area ratio of each user image to the region image respectively to obtain N area ratios.
Specifically, after each user image is acquired, the area ratio of each user image to the entire area image may be calculated.
For example, if there are 5 user images with areas of 1, 2, 3, 4, and 5 square centimeters, respectively, and the total area image is 10 square centimeters, the area ratios of the 5 user images are 0.1, 0.2, 0.3, 0.4, and 0.5, respectively.
And a substep S142 of sorting the N target intents in descending order of the N area ratios.
The target intents may then be arranged in order of large to small area ratios.
For example, suppose the area ratios of the 5 user images are 0.1, 0.2, 0.3, 0.4, and 0.5, and the corresponding target intents include content adjustment, hold, and angle adjustment. After sorting, the order is angle adjustment, hold, content adjustment. Finally, angle adjustment, as the highest-priority target intent, is used to adjust the displayed model.
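Substeps S141 and S142, together with the final selection of the highest-priority intent, can be sketched as follows (function and variable names are illustrative):

```python
def prioritize_intents(user_areas, intents, region_area):
    """Sort target intents in descending order of each user image's
    area ratio relative to the whole area image; the first element of
    the result is the intent that is actually acted upon."""
    ratios = [a / region_area for a in user_areas]
    order = sorted(range(len(intents)),
                   key=lambda i: ratios[i], reverse=True)
    return [intents[i] for i in order]
```

With the example above, the user occupying the largest share of the area image determines the adjustment.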
In summary, the embodiment of the present invention provides an electronic sand table display method based on user actions with the following beneficial effects: the method acquires images of users and determines their postures, determines user intentions based on those postures, selects the user intention with the highest priority, and finally adjusts the displayed model according to that intention, achieving communication and interaction between the displayed model and audience users so as to meet users' viewing requirements.
An embodiment of the present invention further provides an electronic sand table display device based on a user action, and referring to fig. 2, a schematic structural diagram of the electronic sand table display device based on a user action according to an embodiment of the present invention is shown. The device is applied to an electronic sand table with three-dimensional projection equipment.
As an example, the electronic sand table display device based on user actions may include:
the acquisition module 201 is used for acquiring an area image towards which a display area of the electronic sand table faces when the model is displayed and a trigger operation of a user is received;
an intercepting module 202, configured to intercept N user images from the area image, where N is a positive integer greater than or equal to 1;
a determining module 203, which collects N user actions from the N user images and determines N target intents based on the N user actions;
and the adjusting module 204 is configured to sort the N target intents, and control the three-dimensional projection device to adjust a display angle and a display content of the display model according to a sorting result.
Further, the determining module is further configured to:
respectively acquiring the user profile characteristics of each user image to obtain N user profile characteristics;
collecting a plurality of joint point coordinates corresponding to each user contour feature;
connecting a plurality of joint point coordinates corresponding to each user contour feature into corresponding user body states to obtain N user body states;
and determining corresponding user actions based on each user posture to obtain N user actions.
Further, the determining module is further configured to:
respectively calculating the action difference between each user action and a preset static action to obtain a difference action corresponding to each user action;
calculating action amplitude based on each difference action to obtain N action amplitude values;
and determining the target intention of the user according to each action amplitude value to obtain N target intentions.
Further, the target intent includes angle adjustment, content adjustment, and hold;
the determination module is further to:
when the action amplitude is larger than a preset first amplitude, determining that the target intention of the user is content adjustment;
when the action amplitude is smaller than a preset first amplitude and larger than a preset second amplitude, determining that the target intention of the user is angle adjustment, wherein the preset first amplitude is larger than the preset second amplitude;
and when the action amplitude is smaller than a preset second amplitude, determining that the target intention of the user is kept unchanged.
Further, the adjusting module is further configured to:
respectively determining the area ratio of each user image to the area image to obtain N area ratios;
and sorting the N target intents in descending order of the N area ratios.
Further, an embodiment of the present application also provides an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the program, implements the electronic sand table display method based on user actions according to the above embodiments.
Further, an embodiment of the present application also provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and the computer-executable instructions are used to enable a computer to execute the electronic sand table display method based on user actions according to the above embodiment.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (7)

1. An electronic sand table display method based on user actions is applied to an electronic sand table with a three-dimensional projection device, and comprises the following steps:
when the model is displayed and the triggering operation of a user is received, acquiring an area image towards which a display area of the electronic sand table faces;
intercepting N user images from the area image, wherein N is a positive integer greater than or equal to 1;
acquiring N user actions from the N user images, and determining N target intents based on the N user actions;
sequencing the N target intents, and controlling the three-dimensional projection equipment to adjust the display angle and the display content of the display model according to a sequencing result;
the determining N target intents based on the N user actions includes:
respectively calculating the action difference between each user action and a preset static action to obtain a difference action corresponding to each user action;
calculating action amplitude based on each difference action to obtain N action amplitude values;
determining the target intention of the user according to each action amplitude value to obtain N target intentions;
the calculation mode of the action amplitude is as follows: respectively obtaining coordinate values of static actions and coordinate values of dynamic actions in the differential actions, and calculating the difference value of the two coordinate values to obtain an action amplitude value;
the ranking the N target intents comprises:
respectively determining the area ratio of each user image to the area image to obtain N area ratios;
and sorting the N target intents in descending order of the N area ratios.
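The amplitude calculation and area-ratio ranking recited in claim 1 can be sketched as follows. This is an illustrative reading only, not the patented implementation: the per-joint Euclidean distance, the bounding-box area ratio, and all function names are assumptions.

```python
import math

def action_amplitude(static_coords, dynamic_coords):
    """Amplitude of a difference action: summed distance between the
    coordinates of the preset static action and the observed dynamic
    action (Euclidean distance per joint is an assumption)."""
    return sum(math.dist(s, d) for s, d in zip(static_coords, dynamic_coords))

def rank_intents(user_image_sizes, area_image_size, intents):
    """Sort target intents in descending order of each user image's
    area ratio relative to the full area image, as in claim 1."""
    total = area_image_size[0] * area_image_size[1]
    ratios = [(w * h) / total for (w, h) in user_image_sizes]
    # Pair each intent with its ratio, then sort large-to-small.
    order = sorted(zip(ratios, intents), key=lambda p: p[0], reverse=True)
    return [intent for _, intent in order]
```

A user standing closer to the sand table occupies a larger share of the area image, so under this reading their intent is acted on first.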
2. The electronic sand table display method based on user actions according to claim 1, wherein the acquiring N user actions from the N user images comprises:
respectively acquiring the user profile characteristics of each user image to obtain N user profile characteristics;
collecting a plurality of joint point coordinates corresponding to each user contour feature;
connecting a plurality of joint point coordinates corresponding to each user contour feature into corresponding user body states to obtain N user body states;
and determining corresponding user actions based on each user posture to obtain N user actions.
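The posture-building step of claim 2 can be sketched as follows, with the "body state" simplified to line segments between consecutive joint coordinates; the segment representation and both function names are assumptions for illustration, not the claimed skeleton model.

```python
def posture_from_joints(joint_coords):
    """Connect an ordered list of joint coordinates into a user body
    state, represented here as segments between consecutive joints."""
    return list(zip(joint_coords, joint_coords[1:]))

def collect_postures(per_user_joints):
    """Build one posture per user contour feature, yielding N postures."""
    return [posture_from_joints(joints) for joints in per_user_joints]
```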
3. The method for electronic sand table presentation based on user actions according to claim 1, wherein the target intention comprises angle adjustment, content adjustment and hold;
the determining the target intention of the user according to each action amplitude value comprises the following steps:
when the action amplitude is larger than a preset first amplitude, determining that the target intention of the user is content adjustment;
when the action amplitude is smaller than a preset first amplitude and larger than a preset second amplitude, determining that the target intention of the user is angle adjustment, wherein the preset first amplitude is larger than the preset second amplitude;
and when the action amplitude is smaller than the preset second amplitude, determining that the target intention of the user is hold.
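The two-threshold mapping in claim 3 amounts to a simple comparison chain. A minimal sketch, assuming illustrative threshold values (the patent does not specify concrete amplitudes):

```python
def target_intent(amplitude, first_amplitude=0.5, second_amplitude=0.1):
    """Map an action amplitude to a target intent per claim 3:
    large motion -> content adjustment, moderate -> angle adjustment,
    small -> hold. Threshold values are illustrative assumptions."""
    if amplitude > first_amplitude:
        return "content_adjustment"
    if amplitude > second_amplitude:
        return "angle_adjustment"
    return "hold"
```

The first amplitude must exceed the second, so the three branches partition the amplitude range without overlap.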
4. An electronic sand table display device based on user actions, applied to an electronic sand table provided with a three-dimensional projection device, the device comprising:
the acquisition module is used for acquiring an area image towards which a display area of the electronic sand table faces when the model is displayed and the triggering operation of a user is received;
the cropping module is used for cropping N user images from the area image, wherein N is a positive integer greater than or equal to 1;
the determination module is used for acquiring N user actions from the N user images and determining N target intents based on the N user actions;
the adjusting module is used for ranking the N target intents and controlling the three-dimensional projection device to adjust the display angle and the display content of the displayed model according to the ranking result;
the determination module is further configured to:
respectively calculating the action difference between each user action and a preset static action to obtain a difference action corresponding to each user action;
calculating action amplitude based on each difference action to obtain N action amplitude values;
determining the target intention of the user according to each action amplitude value to obtain N target intentions;
the calculation mode of the action amplitude is as follows: respectively obtaining coordinate values of static actions and coordinate values of dynamic actions in the differential actions, and calculating the difference value of the two coordinate values to obtain an action amplitude value;
the adjustment module is further configured to:
respectively determining the area ratio of each user image to the area image to obtain N area ratios;
and sorting the N target intents in descending order of the N area ratios.
5. The user-action-based electronic sand table presentation device of claim 4, wherein the determination module is further configured to:
respectively acquiring the user profile characteristics of each user image to obtain N user profile characteristics;
collecting a plurality of joint point coordinates corresponding to each user contour feature;
connecting a plurality of joint point coordinates corresponding to each user contour feature into corresponding user body states to obtain N user body states;
and determining corresponding user actions based on each user posture to obtain N user actions.
6. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor implements the method for electronic sand table presentation based on user actions according to any one of claims 1 to 3 when executing the program.
7. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method for electronic sand table presentation based on user actions according to any one of claims 1 to 3.
CN202110455460.3A 2021-04-26 2021-04-26 Electronic sand table display method and device based on user actions Active CN113284404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110455460.3A CN113284404B (en) 2021-04-26 2021-04-26 Electronic sand table display method and device based on user actions


Publications (2)

Publication Number Publication Date
CN113284404A CN113284404A (en) 2021-08-20
CN113284404B true CN113284404B (en) 2022-04-08

Family

ID=77275741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110455460.3A Active CN113284404B (en) 2021-04-26 2021-04-26 Electronic sand table display method and device based on user actions

Country Status (1)

Country Link
CN (1) CN113284404B (en)

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450729B (en) * 2017-08-10 2019-09-10 上海木木机器人技术有限公司 Robot interactive method and device
CN107728780B (en) * 2017-09-18 2021-04-27 北京光年无限科技有限公司 Human-computer interaction method and device based on virtual robot
CN108733208A (en) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 The I-goal of smart machine determines method and apparatus
CN109947977A (en) * 2019-03-13 2019-06-28 广东小天才科技有限公司 A kind of intension recognizing method and device, terminal device of combination image
CN111950321B (en) * 2019-05-14 2023-12-05 杭州海康威视数字技术股份有限公司 Gait recognition method, device, computer equipment and storage medium
CN110299152A (en) * 2019-06-28 2019-10-01 北京猎户星空科技有限公司 Interactive output control method, device, electronic equipment and storage medium
CN110597251B (en) * 2019-09-03 2022-10-25 三星电子(中国)研发中心 Method and device for controlling intelligent mobile equipment
CN111367580B (en) * 2020-02-28 2024-02-13 Oppo(重庆)智能科技有限公司 Application starting method and device and computer readable storage medium
CN111651052A (en) * 2020-06-10 2020-09-11 浙江商汤科技开发有限公司 Virtual sand table display method and device, electronic equipment and storage medium
CN111966320B (en) * 2020-08-05 2022-02-01 湖北亿咖通科技有限公司 Multimodal interaction method for vehicle, storage medium, and electronic device
CN112099630B (en) * 2020-09-11 2024-04-05 济南大学 Man-machine interaction method for multi-modal intention reverse active fusion
CN112149574A (en) * 2020-09-24 2020-12-29 济南大学 Accompanying robot-oriented intention flexible mapping method and device
CN112163086B (en) * 2020-10-30 2023-02-24 海信视像科技股份有限公司 Multi-intention recognition method and display device
CN112396997B (en) * 2020-11-30 2022-10-25 浙江神韵文化科技有限公司 Intelligent interactive system for shadow sand table


Similar Documents

Publication Publication Date Title
CN109960401B (en) Dynamic projection method, device and system based on face tracking
US8081822B1 (en) System and method for sensing a feature of an object in an interactive video display
US10241565B2 (en) Apparatus, system, and method of controlling display, and recording medium
WO2020042970A1 (en) Three-dimensional modeling method and device therefor
JP7026825B2 (en) Image processing methods and devices, electronic devices and storage media
CN105763829A (en) Image processing method and electronic device
US20150172634A1 (en) Dynamic POV Composite 3D Video System
WO2022022029A1 (en) Virtual display method, apparatus and device, and computer readable storage medium
CN102274633A (en) Image display system, image display apparatus, and image display method
WO2014161306A1 (en) Data display method, device, and terminal, and display control method and device
CN104850228A (en) Mobile terminal-based method for locking watch area of eyeballs
CN108960002A (en) A kind of movement adjustment information reminding method and device
WO2019028855A1 (en) Virtual display device, intelligent interaction method, and cloud server
US20130069939A1 (en) Character image processing apparatus and method for footskate cleanup in real time animation
KR101256046B1 (en) Method and system for body tracking for spatial gesture recognition
CN114387445A (en) Object key point identification method and device, electronic equipment and storage medium
CN112206515A (en) Game object state switching method, device, equipment and storage medium
CN113284404B (en) Electronic sand table display method and device based on user actions
WO2021258598A1 (en) Method for adjusting displayed picture, and smart terminal and readable storage medium
CN113470190A (en) Scene display method and device, equipment, vehicle and computer readable storage medium
US11682136B2 (en) Display method, display system and non-transitory computer readable storage medium
CN111179341A (en) Registration method of augmented reality equipment and mobile robot
CN113194329B (en) Live interaction method, device, terminal and storage medium
CN109657078A (en) A kind of exchange method and equipment of AR
CN109976533B (en) Display control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant