WO2023070329A1 - Content display method, content display apparatus, storage medium, and electronic device - Google Patents

Content display method, content display apparatus, storage medium, and electronic device

Info

Publication number
WO2023070329A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
viewer
display
determining
writing
Prior art date
Application number
PCT/CN2021/126493
Other languages
English (en)
French (fr)
Other versions
WO2023070329A9 (zh)
Inventor
李咸珍
赵天月
管恩慧
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司
Priority to US 18/685,197 (published as US20240346965A1)
Priority to PCT/CN2021/126493 (published as WO2023070329A1)
Priority to CN202180003075.3A (published as CN116348840A)
Publication of WO2023070329A1
Publication of WO2023070329A9

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14 - Display of multiple viewports
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00 - Aspects of interface with display user

Definitions

  • the present disclosure relates to the field of display technology, and in particular, to a content display method, a content display device, a computer-readable storage medium, and electronic equipment.
  • the present disclosure provides a content display method, a content display device, an electronic device, and a computer-readable storage medium, so as to ensure, at least to a certain extent, that written content is smoothly conveyed to viewers and to improve the viewer's listening experience.
  • a method for displaying content, including: determining, according to the viewer's spatial position information and viewing angle, the visible area corresponding to the viewer in the display device; determining the writing area of the writer in the display device, and determining the display area outside the writing area in the display device as a candidate display area; determining, according to the visible area, a target display area corresponding to the writing area for the viewer in the candidate display area; and displaying, in the target display area, the writing content of the writing area.
  • the method further includes: acquiring a depth image of the scene, determining point cloud data of the scene according to the depth image, and creating a three-dimensional space coordinate system according to the point cloud data; collecting pose information of the viewer, and determining the spatial position information of the viewer in the three-dimensional space coordinate system according to the pose information of the viewer.
  • the determining the viewable area corresponding to the viewer in the display device includes: determining the viewer's field of view according to the viewer's spatial position information and viewing angle; determining the projection area of the viewer's field of view on the display device; and determining the projection area as the viewable area corresponding to the viewer.
  • the determining the viewable area corresponding to the viewer in the display device includes: determining the viewer's field of view according to the viewer's spatial position information and viewing angle; determining a first projection area of the viewer's field of view on the display device; determining a second projection area, on the display device, of an obstacle located between the viewer and the display device; and determining an area in the first projection area that does not overlap with the second projection area as the viewable area corresponding to the viewer.
  • the determining the writing area of the writer in the display device includes: acquiring the content written by the writer in the display device; and determining the smallest rectangular area capable of enclosing the written content as the writing area.
  • the determining a target display area corresponding to the writing area for the viewer in the candidate display area includes: dividing the candidate display area into a plurality of sub-candidate display areas; determining, according to the size of the target display area, the number N of sub-candidate display areas that the target display area needs to contain; evaluating each of the sub-candidate display areas according to the visible area; and selecting, according to the evaluation results, N adjacent sub-candidate display areas as the target display area.
  • the method further includes: determining the size of the target display area according to the size of the writing area; wherein the size of the target display area is not smaller than the size of the writing area, and the size of the target display area is an integer multiple of that of a sub-candidate display area.
  • the determining the size of the target display area according to the size of the writing area includes: enlarging the writing area by a preset multiple; and determining the size of the target display area according to the size of the enlarged writing area.
  • the number of the viewers is multiple; the evaluating each of the sub-candidate display areas according to the visible area includes: determining, according to the viewable area corresponding to each of the viewers, the sub-candidate display areas corresponding to each of the viewers; for each of the sub-candidate display areas, determining the number of occurrences of the sub-candidate display area among the sub-candidate display areas corresponding to the respective viewers; and determining the evaluation result of the sub-candidate display area according to the number of occurrences.
  • the method further includes: determining the viewer according to a viewer selection instruction.
  • a content display device, including: a visible area determination module, configured to determine, according to the viewer's spatial position information and viewing angle, the visible area corresponding to the viewer in the display device;
  • a first display area determination module, configured to determine the writing area of the writer in the display device, and to determine the display area outside the writing area in the display device as a candidate display area;
  • a second display area determination module, configured to determine, according to the visible area, a target display area corresponding to the writing area for the viewer in the candidate display area;
  • a content display module, configured to display, in the target display area, the writing content of the writing area.
  • an electronic device, including: a processor; and a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the methods as provided by some aspects of the present disclosure.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method as provided in some aspects of the present disclosure is implemented.
  • Fig. 1 shows a schematic diagram of an application scenario of a content display method in an embodiment of the present disclosure.
  • Fig. 2 shows a schematic flowchart of a content display method in an embodiment of the present disclosure.
  • Fig. 3 shows a schematic flowchart of determining viewer spatial position information in an embodiment of the present disclosure.
  • Fig. 4 shows a schematic diagram of a three-dimensional space coordinate system in an embodiment of the present disclosure.
  • Fig. 5 shows a schematic diagram of determining pose information corresponding to a viewer in an embodiment of the present disclosure.
  • Fig. 6 shows a schematic flowchart of determining a viewer's visible area in an embodiment of the present disclosure.
  • Fig. 7 shows a schematic diagram of an application scenario of a content display method in an embodiment of the present disclosure.
  • Fig. 8 shows a schematic diagram of a human eye viewing angle in the horizontal direction in an embodiment of the present disclosure.
  • Fig. 9 shows a schematic diagram of a viewing angle in the vertical direction of the human eye in an embodiment of the present disclosure.
  • Fig. 10 shows a schematic diagram of viewer's viewable area in an embodiment of the present disclosure.
  • Fig. 11 shows a schematic diagram of an application scenario of a content display method in an embodiment of the present disclosure.
  • Fig. 12 shows a schematic diagram of an application scenario of a content display method in an embodiment of the present disclosure.
  • Fig. 13 shows a schematic flowchart of determining a viewer's visible area in an embodiment of the present disclosure.
  • Fig. 14 shows a schematic view of viewer's viewable area in an embodiment of the present disclosure.
  • FIG. 15 shows a schematic diagram of a display device in an embodiment of the present disclosure.
  • Fig. 16 shows a schematic flowchart of determining a target display area in an embodiment of the present disclosure.
  • Fig. 17 shows a schematic flowchart of determining the evaluation result of the sub-candidate display area in the embodiment of the present disclosure.
  • Fig. 18 shows a schematic diagram of a display device in an embodiment of the present disclosure.
  • Fig. 19 shows a schematic diagram of a module of a content display device in an embodiment of the present disclosure.
  • FIG. 20 shows a schematic structural diagram of a computer system for implementing the electronic device of the embodiment of the present disclosure.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments may, however, be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of example embodiments to those skilled in the art.
  • the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • Fig. 1 shows a schematic diagram of an exemplary application scenario to which the content display method of the embodiment of the present disclosure can be applied.
  • the display device 100 may be an electronic whiteboard; the electronic whiteboard is an interactive display device with a large display screen; with a specific stylus, writing, editing, annotation, saving, and other operations can be performed on the electronic whiteboard; the electronic whiteboard can be used in conference presentations or teaching environments for users to demonstrate or explain.
  • the display device 100 may also be other display devices capable of realizing writing functions and display functions, which is not limited in this exemplary embodiment.
  • the display device 100 may also have a mechanism with an arithmetic processing function, for example, including a built-in processor for realizing the arithmetic processing function, or an external server for realizing the arithmetic processing function.
  • the viewers 300 may be conference participants, students, audiences, etc.; the writers 200 may be conference participants, lecturers, presenters, etc.
  • the writer 200 can write content to be displayed on the display device 100 , such as key points, equations, flow charts, diagrams or other content; the viewer 300 can follow the speaker's handwriting and watch the written content.
  • the writer 200 can often only write in a part of the display device; meanwhile, the viewer 300 is often watching from a fixed position (such as around the conference table 400); moreover, factors such as other viewers in front of the viewer's seat and the writer 200 near the display device 100 may interfere with the viewer's viewing of the written content, thereby reducing the viewer's listening effect and affecting user experience.
  • this example embodiment provides a method for displaying content; the method can be executed by a built-in or externally connected mechanism of the above-mentioned display device that has a processing function.
  • the content display method provided in this exemplary embodiment may include the following steps:
  • Step S210: according to the viewer's spatial position information and viewing angle, determine the viewable area corresponding to the viewer in the display device.
  • Step S220: determine the writing area of the writer on the display device, and determine a display area in the display device outside the writing area as a candidate display area.
  • Step S230: according to the visible area, determine a target display area corresponding to the writing area for the viewer in the candidate display area.
  • Step S240: in the target display area, display the writing content of the writing area.
  • the target display area is determined in combination with the viewer's spatial position information and viewing angle; therefore, it is more convenient and better suited for the viewer to view the target display area, thereby improving the viewing experience of the viewer.
  • the spatial position information of the viewer includes, for example, the spatial position of the viewer and the spatial position of some specific features of the viewer, such as the spatial position of the viewer's head or eyes.
  • step S210 the viewable area corresponding to the viewer in the display device is determined according to the viewer's spatial position information and viewing angle.
  • the spatial position information of the viewer may be determined in a variety of different ways. For example, if the application scenario is a place where the position of the viewer is relatively fixed (for example, the seats and the positions of the viewers remain unchanged for a long time), the spatial position information of the viewer can be pre-calibrated, and then the pre-calibrated spatial position information of the viewer can be obtained directly when needed.
  • if the application scenario is a place where the position of the viewer may change, such as a conference room or a classroom, the spatial position information of the viewer can be obtained in real time or periodically; the spatial position information of the viewer may also be acquired only after a change in it is detected; this is not specifically limited in this exemplary embodiment.
  • the spatial position information of the viewer can be determined through the following steps S310 and S320 .
  • Step S310 acquiring a depth image of the scene, determining point cloud data of the scene according to the depth image, and creating a three-dimensional space coordinate system according to the point cloud data.
  • a three-dimensional scanning device, such as a laser radar, a stereo camera, or a time-of-flight camera, can be used to scan the scene, so as to obtain images of the objects in the scene (such as the display device, conference table, seats, walls, podium, etc.) and the corresponding depth information.
  • feature extraction can be performed on the images of the objects in the scene to obtain feature information, and the feature information is used to perform visual tracking and motion estimation to obtain intermediate results; then, the depth information corresponding to the images of the objects and the internal parameters of the 3D scanning device are used to obtain local point cloud data; finally, the above intermediate results and local point cloud data are used to generate global point cloud data, so as to construct the 3D space coordinate system corresponding to the current scene based on the global point cloud data; for example, the 3D space coordinate system corresponding to the current scene can be in the form shown in Figure 4.
  • Step S320 collecting pose information of the viewer, and determining the spatial position information of the viewer in the three-dimensional space coordinate system according to the pose information of the viewer.
  • a three-dimensional scanning device, such as a laser radar, a stereo camera, or a time-of-flight camera, can also be used to scan the viewers, so as to obtain images of each viewer and the corresponding depth information. Based on the images of each viewer and the corresponding depth information, the pose information of the viewer can be obtained.
  • the pose information corresponding to each viewer can be expressed as an array of N rows and 3 columns (depending on the sensor, the number of columns can also be another number), where each row corresponds to a single point whose position in the three-dimensional space coordinate system is denoted as (x, y, z).
  • other methods may also be used to determine the position of the viewer in the above-mentioned three-dimensional space coordinate system, which also belongs to the protection scope of the present disclosure.
  • the visible area corresponding to the viewer in the display device may be determined through the following steps S610 to S630, in which:
  • Step S610 Determine the viewer's viewing area according to the viewer's spatial position information and viewing angle.
  • the angle over which the human eye can simultaneously see objects in front of it is called the viewing angle.
  • from small to large, viewing angles can be divided into the following categories:
  • Monocular viewing angle 1: the viewing angle when one eye looks straight ahead, the eyeball cannot turn, and the head cannot turn. Taking the right eye as an example, the upper viewing angle is usually 50°, the lower viewing angle is usually 70°, the left viewing angle is usually 56°, and the right viewing angle is usually 100°.
  • Monocular viewing angle 2: the viewing angle when one eye looks straight ahead, the eyeball cannot turn, and the head can turn.
  • to represent the eyeball's visual range more completely, the occlusion of the eye socket and nose can be removed; taking the right eye as an example, the upper viewing angle is usually 55°, the lower viewing angle is usually 75°, the left viewing angle is usually 60°, and the right viewing angle is usually 100°.
  • Binocular viewing angle 1: the viewing angle when both eyes look straight ahead, the eyeballs cannot turn, and the head cannot turn.
  • the upper and lower viewing angles usually total 120°, and the left and right viewing angles usually total 200°.
  • Binocular viewing angle 2: the viewing angle when both eyes look straight ahead, the eyeballs cannot turn, and the head can turn.
  • the upper and lower viewing angles usually total 130°, and the left and right viewing angles usually total 200°.
  • Monocular viewing angle 3: the viewing angle when the eyeball can turn and the head cannot turn. Taking the right eye as an example, the upper viewing angle is usually 70°, the lower viewing angle is usually 80°, the left viewing angle is usually 65°, and the right viewing angle is usually 115°.
  • Binocular viewing angle 3: the viewing angle when the eyeballs can turn and the head cannot turn.
  • the upper and lower viewing angles usually total 150°, and the left and right viewing angles usually total 230°.
  • during a meeting, the viewer usually watches the writing on the board in a relatively comfortable posture, that is, the eyeballs can rotate and the head basically does not turn; the viewer's monocular horizontal viewing angle then reaches up to 180°, and the binocular horizontal viewing angle reaches up to 230°.
  • within the range seen by the human eye, usually only objects within the central 124° of the visual field appear stereoscopic (such as the central area between X1 and X2 in the figure); human vision is usually most sensitive within 10°, can correctly identify information within 10°-20°, and is relatively sensitive to moving things within 20°-30°.
  • when the vertical viewing angle of the image is 20° and the horizontal viewing angle is 36°, the viewer usually has a good sense of visual presence and does not become fatigued from frequent eye movement.
  • Step S620: determine the projection area of the viewer's field of view on the display device.
  • Step S630: determine the projection area as the viewable area corresponding to the viewer.
  • vertex A is taken as the equivalent viewpoint, and in the case of no occlusion, the projection area of the horizontal and vertical fields of view on the whiteboard plane is SQ; the projection area SQ may then be determined as the viewable area corresponding to the viewer.
  • in a real scene, there are often line-of-sight obstructions 500 between the viewer 300 and the display device 100, such as the speaker's body, other viewers' bodies, tables and chairs, and computers.
  • referring to FIG. 12, if there is no visual obstruction, a straight line can be drawn between a three-dimensional pixel point of the viewer and a display unit of the display device; if there is a visual obstruction 500, there will be an intersection point on the straight line between the viewer's three-dimensional pixel point and the display unit; accordingly, it can be judged whether there is a line-of-sight obstruction between the viewer 300 and the display device 100, and the corresponding visible area of the viewer can be determined according to the line-of-sight obstruction.
  • the visible area corresponding to the viewer in the display device may be determined through the following steps S1310 to S1340, in which:
  • Step S1310 Determine the viewer's viewing area according to the viewer's spatial position information and viewing angle. This step is similar to the above step S610, so it will not be repeated here.
  • Step S1320 Determine the first projection area SQ of the viewer's viewing area on the display device. This step is similar to the above step S620, so it will not be repeated here.
  • Step S1330: determining a second projection area, on the display device, of the visual obstruction located between the viewer and the display device. For example, similar to the above step S320, in this exemplary embodiment the image of the visual obstruction and the corresponding depth information can be collected, and the spatial position information of the visual obstruction in the three-dimensional space coordinate system can be determined based on that image and depth information. Then, taking the equivalent viewpoint of the viewer as a virtual point light source, the second projection area SZ of the sight-line obstruction 500 on the display device 100 is calculated.
  • Step S1340: determining an area in the first projection area that does not overlap with the second projection area as the viewable area corresponding to the viewer.
  • the viewable area corresponding to the viewer is the first projection area SQ minus the second projection area SZ.
  • if the second projection area is not entirely within the first projection area, the viewer's corresponding viewable area is the first projection area SQ minus the area where the second projection area overlaps the first projection area; this is not specifically limited in this exemplary embodiment.
  • the viewable area S corresponding to the viewer shown in FIG. 14 can be expressed as the set of display units that lie within the first projection area SQ and outside the second projection area SZ.
  • step S220 determine the writing area of the display device where the writer is writing, and determine a display area outside the writing area in the display device as a candidate display area.
  • the content written by the writer on the display device may be acquired first.
  • the handwriting of a writer is detected, and all handwriting is used as the writing content.
  • the smallest rectangular area capable of enclosing the written content is determined as the writing area.
  • other smallest convex polygonal areas (such as trapezoids or regular hexagons) or circular areas capable of enclosing the written content can also be determined as the writing area, which also falls within the scope of protection of the present disclosure.
  • the writing area may also be a fixed area in the display device, so that the area can be acquired directly without being re-determined.
  • the writing area can also be determined with the writer's assistance; for example, the writer can circle some areas on the display device (with or without written content) as the above-mentioned writing area.
  • the specific manner of determining the writing area is not specifically limited in this exemplary embodiment.
  • step S230 according to the visible area, a target display area corresponding to the writing area is determined for the viewer in the candidate display area.
  • a target display area corresponding to the writing area may be determined for the viewer in the candidate display area through the following steps S1610 to S1640, in which:
  • Step S1610 dividing the candidate display area into a plurality of sub-candidate display areas.
  • the display device may be divided into multiple sub-regions in advance, and then, after the candidate display area is determined, the sub-regions contained in the candidate display area are determined as the sub-candidate display areas it contains; that is, the division result of the candidate display area is obtained based on the division result of the display device. It is also possible to divide the candidate display area only after it has been determined, to obtain the above-mentioned plurality of sub-candidate display areas; this is not specifically limited in this exemplary embodiment.
  • the specific division rule may be specifically determined according to the attribute of the display device, which is also not specifically limited in this exemplary embodiment.
  • Step S1620 according to the size of the target display area, determine the number N of sub-candidate display areas to be included in the target display area.
  • the size of the target display area may be determined first according to the size of the writing area; wherein the size of the target display area is not smaller than the size of the writing area, and the size of the target display area is an integer multiple of that of a sub-candidate display area. For example, if the size of the writing area is 400×200 sub-pixels and the size of a sub-candidate display area is 240×120 sub-pixels, then, since the size of the target display area is an integer multiple of that of a sub-candidate display area, the target display area needs to include 4 sub-candidate display areas, and the size of the target display area is 480×240 sub-pixels.
  • the writing area may also be enlarged by a preset multiple, and the size of the target display area may then be determined according to the size of the enlarged writing area.
  • for example, the size of the writing area is 400×200 sub-pixels; to allow the viewer to read the written content more clearly, the writing area can be enlarged X times, where X can be a positive number greater than 1, such as 1.5, 2, or 4.
  • if the writing area is enlarged 2.25 times, the size of the target display area needs to be larger than 600×300 sub-pixels; since the size of the above-mentioned sub-candidate display area is 240×120 sub-pixels, and the size of the target display area is an integer multiple of that of a sub-candidate display area, the target display area needs to include 9 sub-candidate display areas, and the size of the target display area is 720×360 sub-pixels.
  • the number N of sub-candidate display areas to be included in the target display area may also be determined in other ways, which is not limited in this exemplary embodiment.
  • Step S1630 evaluating each of the sub-candidate display regions according to the visible region.
  • each of the sub-candidate display areas may be evaluated through the following steps S1710 to S1730, in which:
  • Step S1710 Determine the sub-candidate display area corresponding to each viewer according to the viewable area corresponding to each viewer. For example, referring to FIG. 18 , assuming that according to the sub-pixels included in the visible area determined in step 1340 above, it is obtained that the corresponding visible area 301 corresponding to viewer A includes sub-candidate display areas 2, 3, 8, and 9, The corresponding viewable area 302 corresponding to viewer B includes sub-candidate display areas 3 , 4 , 9 , and 10 , and the corresponding viewable area 303 corresponding to viewer C includes sub-candidate display areas 9 , 10 , 15 , and 16 .
  • sub-candidate display areas that are not completely included in a viewer's viewable area can be regarded as sub-candidate display areas not included in that viewer's viewable area; for example, viewer A's viewable area 301 does not include sub-candidate display areas 1, 7, 13, 14, and 15, and viewer B's viewable area 302 does not include sub-candidate display areas 5, 11, 15, 16, 17, etc.
  • in some exemplary embodiments, however, sub-candidate display areas that are not completely contained in a viewer's viewable area can also be counted, according to certain rules (such as setting a coefficient according to the overlapping area), among the sub-candidate display areas included in that viewer's visible area; this is not specifically limited in this exemplary embodiment.
  • Step S1720: for each of the sub-candidate display areas, determine the number of occurrences of the sub-candidate display area among the sub-candidate display areas corresponding to the respective viewers. For example, continuing to refer to FIG. 18, sub-candidate display area 3 appears in viewer A's viewable area 301 and viewer B's viewable area 302; sub-candidate display area 9 appears in viewer A's viewable area 301, viewer B's viewable area 302, and viewer C's viewable area 303; sub-candidate display area 10 appears in viewer B's viewable area 302 and viewer C's viewable area 303; and so on.
  • after counting, the number of occurrences of sub-candidate display areas 2, 4, 8, 15, and 16 among the sub-candidate display areas corresponding to all the viewers is 1;
  • the number of occurrences of sub-candidate display areas 3 and 10 is 2;
  • the number of occurrences of sub-candidate display area 9 among the sub-candidate display areas corresponding to all the viewers is 3.
  • Step S1730 Determine the evaluation result of the sub-candidate display area according to the number of occurrences. For example, continuing to refer to FIG. 18, if the sub-candidate display area 9 appears the most, a higher evaluation score can be set for the sub-candidate display area 9; correspondingly, the sub-candidate display areas 2, 4, 8, 15 , 16 appear less frequently, then lower evaluation scores can be set for these sub-candidate display regions.
  • the number of occurrences of a sub-candidate display area can be directly set as its evaluation score; for example, sub-candidate display area 9 has an evaluation score of 3, and sub-candidate display areas 2, 4, 8, 15, and 16 have an evaluation score of 1.
  • Step S1640: according to the evaluation results, select N adjacent sub-candidate display areas as the target display area.
  • continuing to refer to FIG. 18, if the target display area needs to contain 3 sub-candidate display areas, then, since sub-candidate display area 9 has the highest evaluation score, the target display area needs to contain at least sub-candidate display area 9; the groups of 3 adjacent sub-candidate display areas containing sub-candidate display area 9 are sub-candidate display areas 7, 8, 9; sub-candidate display areas 8, 9, 10; and sub-candidate display areas 9, 10, 11.
  • the total evaluation score of sub-candidate display areas 7, 8, and 9 is 4, the total evaluation score of sub-candidate display areas 8, 9, and 10 is 6, and the total evaluation score of sub-candidate display areas 9, 10, and 11 is 5. Therefore, sub-candidate display areas 8, 9, and 10 can be used as the target display area.
  • the adjacent N sub-candidate display areas may also be selected as the target display area in other ways; for example, the N sub-candidate display areas with the highest total evaluation score may be directly selected as the target display area, and the like.
  • the sub-candidate display regions may also be evaluated in other ways, for example, the sub-candidate display regions are evaluated according to the positions of the sub-candidate display regions in the viewable regions of the viewers. Evaluation, etc.; these all belong to the protection scope of the present disclosure.
  • step S240 the writing content of the writing area is displayed in the target display area.
  • the written content can be displayed directly in the target display area; the written content can also be processed before being displayed in the target display area; for example, the handwriting of the written content can be displayed in the target display area after processing such as darkening, optimization, or alignment.
  • the above-mentioned viewer may also be determined according to a viewer selection instruction of a writer or other users. That is, in these exemplary embodiments, not all users who watch the display device may be the above-mentioned viewers, but only designated users who watch the display device are the above-mentioned viewers.
  • the viewer can be determined according to the professional field, degree of interest, role, etc. of the users watching the display device; the viewer can also be determined by the writer or other users according to the issues that need to be discussed and the people who need to interact; then, through the method of the present application, the target display area is determined mainly for the viewer, and the writing content of the writing area is displayed in the target display area. In this way, interaction between the speaker and the designated viewers is more convenient, and user experience and communication efficiency can be improved.
  • the content display device 1900 may include a visible area determination module 1910, a first display area determination module 1920, a second display area determination module 1930, and a content display module 1940, in which:
  • the visible area determination module 1910 can be used to determine the visible area corresponding to the viewer in the display device according to the viewer's spatial position information and viewing angle; the first display area determination module 1920 can be used to determine the writer's The writing area of the display device, and determining a display area outside the writing area in the display device as a candidate display area; the second display area determination module 1930 can be used to, according to the visible area, in the In the candidate display area, a target display area corresponding to the writing area is determined for the viewer; the content display module 1940 may be configured to display the writing content of the writing area in the target display area.
  • an electronic device, including: a processor; and a memory configured to store processor-executable instructions; wherein the processor is configured to execute any of the methods described in this exemplary embodiment.
  • Fig. 20 is a schematic structural diagram of a computer system for realizing the electronic device of the embodiment of the present disclosure. It should be noted that the computer system 2000 of the electronic device shown in FIG. 20 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • a computer system 2000 includes a central processing unit 2001 that can perform various appropriate actions and processes according to programs stored in a read-only memory 2002 or programs loaded from a storage section 2008 into a random access memory 2003 .
  • the random access memory 2003 also stores various programs and data necessary for system operation.
  • the CPU 2001 , the read only memory 2002 and the random access memory 2003 are connected to each other through a bus 2004 .
  • An input/output interface 2005 is also connected to the bus 2004 .
  • the following components are connected to the input/output interface 2005: an input section 2006 including a keyboard, a mouse, etc.; an output section 2007 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a speaker, etc.; a storage section 2008 including a hard disk, etc.; and a communication section 2009 including a network interface card such as a local area network (LAN) card, a modem, or the like.
  • the communication section 2009 performs communication processing via a network such as the Internet.
  • a drive 2010 is also connected to the input/output interface 2005 as necessary.
  • a removable medium 2011, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is mounted on the drive 2010 as necessary so that a computer program read therefrom is installed into the storage section 2008 as necessary.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts.
  • the computer program may be downloaded and installed from a network via the communication part 2009 and/or installed from a removable medium 2011.
  • when the computer program is executed by the central processing unit 2001, various functions defined in the device of the present application are performed.
  • a non-volatile computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a computer, the computer executes any one of the methods described above.
  • the non-volatile computer-readable storage medium shown in the present disclosure may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wires, optical cables, radio frequency, etc., or any suitable combination of the above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present disclosure relates to the field of display technology, and in particular to a content display method, a content display apparatus, a storage medium, and an electronic device. The content display method includes: determining, according to a viewer's spatial position information and viewing angle, a visible area of a display device corresponding to the viewer; determining a writing area of a writer on the display device, and determining a display area of the display device outside the writing area as a candidate display area; determining, according to the visible area, a target display area corresponding to the writing area for the viewer within the candidate display area; and displaying, in the target display area, the written content of the writing area. The method can improve the viewer's listening experience to a certain extent.

Description

Content display method, content display apparatus, storage medium, and electronic device
Technical Field
The present disclosure relates to the field of display technology, and in particular to a content display method, a content display apparatus, a computer-readable storage medium, and an electronic device.
Background
At present, more and more display devices support writing functions. However, limited by factors such as the writer's height and arm length, the writer can often only write in part of the display device; meanwhile, viewers typically watch from fixed positions.
As a result, factors such as other viewers seated in front of a viewer and the writer standing near the display device may obstruct the viewer's view of the written content, reducing the viewer's listening experience and degrading the user experience.
It should be noted that the information disclosed in the Background section above is only intended to enhance understanding of the background of the present disclosure, and may therefore include information that does not constitute prior art known to a person of ordinary skill in the art.
Summary
The present disclosure provides a content display method, a content display apparatus, an electronic device, and a computer-readable storage medium, so as to ensure, at least to a certain extent, that written content is smoothly conveyed to viewers and to improve the viewer's listening experience.
According to one aspect of the present disclosure, a content display method is provided, including: determining, according to a viewer's spatial position information and viewing angle, a visible area of a display device corresponding to the viewer; determining a writing area of a writer on the display device, and determining a display area of the display device outside the writing area as a candidate display area; determining, according to the visible area, a target display area corresponding to the writing area for the viewer within the candidate display area; and displaying, in the target display area, the written content of the writing area.
In an exemplary embodiment of the present disclosure, the method further includes: acquiring a depth image of the scene, determining point cloud data of the scene according to the depth image, and creating a three-dimensional coordinate system according to the point cloud data; and collecting pose information of the viewer, and determining the spatial position information of the viewer in the three-dimensional coordinate system according to the pose information of the viewer.
In an exemplary embodiment of the present disclosure, determining the visible area of the display device corresponding to the viewer includes: determining the viewer's field of view according to the viewer's spatial position information and viewing angle; determining a projection area of the viewer's field of view on the display device; and determining the projection area as the visible area corresponding to the viewer.
In an exemplary embodiment of the present disclosure, determining the visible area of the display device corresponding to the viewer includes: determining the viewer's field of view according to the viewer's spatial position information and viewing angle; determining a first projection area of the viewer's field of view on the display device; determining a second projection area, on the display device, of a line-of-sight obstruction located between the viewer and the display device; and determining an area of the first projection area that does not overlap the second projection area as the visible area corresponding to the viewer.
In an exemplary embodiment of the present disclosure, determining the writing area of the writer on the display device includes: acquiring content written by the writer on the display device; and determining the smallest rectangular area capable of enclosing the written content as the writing area.
In an exemplary embodiment of the present disclosure, determining, within the candidate display area, the target display area corresponding to the writing area for the viewer includes: dividing the candidate display area into a plurality of sub-candidate display areas; determining, according to the size of the target display area, the number N of sub-candidate display areas that the target display area needs to contain; evaluating each sub-candidate display area according to the visible area; and selecting, according to the evaluation results, N adjacent sub-candidate display areas as the target display area.
In an exemplary embodiment of the present disclosure, the method further includes: determining the size of the target display area according to the size of the writing area, where the size of the target display area is not smaller than the size of the writing area, and the size of the target display area is an integer multiple of that of a sub-candidate display area.
In an exemplary embodiment of the present disclosure, determining the size of the target display area according to the size of the writing area includes: enlarging the writing area by a preset multiple; and determining the size of the target display area according to the size of the enlarged writing area.
In an exemplary embodiment of the present disclosure, there are multiple viewers, and evaluating each sub-candidate display area according to the visible area includes: determining, according to the visible area corresponding to each viewer, the sub-candidate display areas corresponding to that viewer; for each sub-candidate display area, determining the number of times the sub-candidate display area occurs among the sub-candidate display areas corresponding to the respective viewers; and determining an evaluation result of the sub-candidate display area according to the number of occurrences.
In an exemplary embodiment of the present disclosure, the method further includes: determining the viewer according to a viewer selection instruction.
According to one aspect of the present disclosure, a content display apparatus is provided, including: a visible area determination module configured to determine, according to a viewer's spatial position information and viewing angle, a visible area of a display device corresponding to the viewer; a first display area determination module configured to determine a writing area of a writer on the display device, and to determine a display area of the display device outside the writing area as a candidate display area; a second display area determination module configured to determine, according to the visible area, a target display area corresponding to the writing area for the viewer within the candidate display area; and a content display module configured to display, in the target display area, the written content of the writing area.
According to one aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory configured to store one or more programs that, when executed by the processor, cause the processor to implement the method provided in some aspects of the present disclosure.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the method provided in some aspects of the present disclosure is implemented.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure. Obviously, the drawings described below are only some embodiments of the present disclosure, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of a content display method in an embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of a content display method in an embodiment of the present disclosure.
Fig. 3 is a schematic flowchart of determining a viewer's spatial position information in an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a three-dimensional coordinate system in an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of determining pose information corresponding to a viewer in an embodiment of the present disclosure.
Fig. 6 is a schematic flowchart of determining a viewer's visible area in an embodiment of the present disclosure.
Fig. 7 is a schematic diagram of an application scenario of a content display method in an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of the horizontal viewing angle of the human eye in an embodiment of the present disclosure.
Fig. 9 is a schematic diagram of the vertical viewing angle of the human eye in an embodiment of the present disclosure.
Fig. 10 is a schematic diagram of a viewer's visible area in an embodiment of the present disclosure.
Fig. 11 is a schematic diagram of an application scenario of a content display method in an embodiment of the present disclosure.
Fig. 12 is a schematic diagram of an application scenario of a content display method in an embodiment of the present disclosure.
Fig. 13 is a schematic flowchart of determining a viewer's visible area in an embodiment of the present disclosure.
Fig. 14 is a schematic diagram of a viewer's visible area in an embodiment of the present disclosure.
Fig. 15 is a schematic diagram of a display device in an embodiment of the present disclosure.
Fig. 16 is a schematic flowchart of determining a target display area in an embodiment of the present disclosure.
Fig. 17 is a schematic flowchart of determining an evaluation result of a sub-candidate display area in an embodiment of the present disclosure.
Fig. 18 is a schematic diagram of a display device in an embodiment of the present disclosure.
Fig. 19 is a schematic block diagram of a content display apparatus in an embodiment of the present disclosure.
Fig. 20 is a schematic structural diagram of a computer system for implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be more thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In addition, the accompanying drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. Identical reference numerals in the drawings denote identical or similar parts, so their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities; these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses.
It should be noted that, in the present disclosure, the terms "include", "configured with", and "disposed on" are used in an open, inclusive sense, meaning that additional elements/components/etc. may exist in addition to those listed.
Fig. 1 is a schematic diagram of an exemplary application scenario to which the content display method of an embodiment of the present disclosure can be applied. The application scenario includes at least a display device 100, a writer 200, and a viewer 300. In this example embodiment, the display device 100 may be an electronic whiteboard, which is an interactive display apparatus with a large display screen; with a specific stylus, writing, editing, annotation, saving, and other operations can be performed on the electronic whiteboard; the electronic whiteboard can be used in conference presentations or teaching environments for users to demonstrate or explain. Of course, in other exemplary embodiments of the present disclosure, the display device 100 may also be another display apparatus capable of providing writing and display functions, which is not limited in this example embodiment. In addition, the display device 100 may further have a mechanism with computing and processing capability, for example a built-in processor that provides the computing and processing function, or an external server that provides the computing and processing function. In this application scenario, the viewer 300 may be a conference participant, a student, an audience member, and so on; the writer 200 may be a conference participant, a lecturer, a presenter, and so on. The writer 200 can write content to be presented on the display device 100, such as key points, equations, flow charts, diagrams, or other content; the viewer 300 follows the speaker's handwriting and watches the written content.
Continuing to refer to Fig. 1, limited by factors such as the writer 200's height and arm length, the writer 200 can often only write in part of the display device; meanwhile, the viewer 300 typically watches from a fixed position (such as around the conference table 400); moreover, factors such as other viewers seated in front of the viewer and the writer 200 standing near the display device 100 may obstruct the viewer's view of the written content, reducing the viewer's listening experience and degrading the user experience.
Referring to Fig. 2, to address at least some of the above problems, this example embodiment provides a content display method; the method can be executed by a mechanism with computing and processing capability built into or externally connected to the above display device. The content display method provided in this example embodiment may include the following steps:
Step S210: according to the viewer's spatial position information and viewing angle, determine the visible area of the display device corresponding to the viewer.
Step S220: determine the writing area of the writer on the display device, and determine a display area of the display device outside the writing area as a candidate display area.
Step S230: according to the visible area, determine, within the candidate display area, a target display area corresponding to the writing area for the viewer.
Step S240: display, in the target display area, the written content of the writing area.
Based on the content display method provided by this example embodiment of the present disclosure, on the one hand, since the viewer can watch the written content not only directly in the writing area (such as the writing area 120 shown in Fig. 1) but also through the target display area (such as the target display area 110 shown in Fig. 1), the viewer's view of the written content is not affected even if the writing area is occluded, which improves the efficiency of information delivery and the viewer's listening experience. On the other hand, since the target display area is determined in combination with the viewer's spatial position information and viewing angle, it is more convenient and better suited for the viewer to view, further improving the viewing experience. Furthermore, even if the writer can only write in a small portion of the display device due to height or other constraints, the written content can still be smoothly conveyed to the viewers, so the writer can conveniently write anywhere on the display device without restriction. The viewer's spatial position information includes, for example, the spatial position of the viewer and the spatial positions of certain specific features of the viewer, such as the spatial position of the viewer's head or eyes.
The steps of the content display method in this example embodiment are described in more detail below with reference to the drawings and embodiments.
In step S210, the visible area of the display device corresponding to the viewer is determined according to the viewer's spatial position information and viewing angle.
In this example embodiment, the viewer's spatial position information can be determined in a variety of ways depending on the application scenario. For example, if the application scenario is a place where the viewer's position is relatively fixed (e.g., the seats and seating order remain unchanged for a long time), the viewer's spatial position information can be calibrated in advance and then retrieved directly when needed. As another example, if the application scenario is a place where the viewer's position may change, such as a conference room or classroom, the viewer's spatial position information can be acquired in real time or periodically; it may also be acquired only after a change in the viewer's spatial position is detected; this is not specifically limited in this example embodiment.
For example, referring to Fig. 3, the viewer's spatial position information can be determined through the following steps S310 and S320.
Step S310: acquiring a depth image of the scene, determining point cloud data of the scene according to the depth image, and creating a three-dimensional coordinate system according to the point cloud data.
In this example embodiment, a three-dimensional scanning device, such as a lidar, a stereo camera, or a time-of-flight camera, can be used to scan the scene to acquire images of the objects in the scene (such as the display device, conference table, seats, walls, lectern, etc.) and the corresponding depth information. Then, feature extraction can be performed on the images of the objects in the scene to obtain feature information, and the feature information can be used for visual tracking and motion estimation to obtain intermediate results; next, the depth information corresponding to the images of the objects and the intrinsic parameters of the three-dimensional scanning device are used to obtain local point cloud data; finally, the intermediate results and the local point cloud data are used to generate global point cloud data, and a three-dimensional coordinate system corresponding to the current scene is constructed based on the global point cloud data; for example, the three-dimensional coordinate system corresponding to the current scene may take the form shown in Fig. 4.
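As an illustration of only the back-projection step just described (the feature extraction, visual tracking, motion estimation, and global registration are omitted), the following Python sketch converts a depth image into an N x 3 point cloud under an assumed pinhole camera model; the intrinsic parameters fx, fy, cx, cy and the depth values are hypothetical and are not taken from the present disclosure.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an N x 3 point cloud
    in the camera coordinate frame, using a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # lateral offset from the optical axis
    y = (v - cy) * z / fy          # vertical offset from the optical axis
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Hypothetical usage: a 480 x 640 depth frame from a time-of-flight camera.
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3): one (x, y, z) point per valid pixel
```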
Step S320: collecting pose information of the viewer, and determining the spatial position information of the viewer in the three-dimensional coordinate system according to the pose information of the viewer.
In this example embodiment, a three-dimensional scanning device, such as a lidar, a stereo camera, or a time-of-flight camera, can likewise be used to scan the viewers to acquire an image of each viewer and the corresponding depth information. Based on each viewer's image and the corresponding depth information, the viewer's pose information can be obtained. Of course, in some exemplary embodiments it is also possible to acquire only each viewer's image and then convert it into a polygonal or triangular mesh model through surface reconstruction or similar methods, and thereby obtain the viewer's pose information. After the viewer's pose information is obtained, the viewer's position in the above three-dimensional coordinate system can be determined in combination with an algorithm such as ICP (Iterative Closest Point). Referring to Fig. 5, the pose information corresponding to each viewer can be represented as an array with N rows and 3 columns (depending on the sensor, the number of columns may also differ), where each row corresponds to a single point whose position in the three-dimensional coordinate system is denoted (x, y, z). It is easy to understand, however, that other methods may also be used to determine the viewer's position in the above three-dimensional coordinate system, and these likewise fall within the scope of protection of the present disclosure.
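The disclosure represents each viewer's pose as an array of N rows and 3 columns and mentions registering it in the scene coordinate system with an algorithm such as ICP. The sketch below shows only the final step, under the assumption that a rigid transform (rotation R, translation t) has already been estimated by such a registration; R, t, and the sample points are hypothetical.

```python
import numpy as np

def to_scene_frame(viewer_points, rotation, translation):
    """Map an N x 3 array of viewer points from the sensor frame into the
    scene (three-dimensional) coordinate system using a rigid transform."""
    return viewer_points @ rotation.T + translation

# Hypothetical registration result (e.g. refined by ICP): the sensor is
# rotated 90 degrees about the z-axis and offset 2 m along x, 1 m along y.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([2.0, 1.0, 0.0])

viewer_points = np.array([[0.1, 0.2, 1.5],   # e.g. points sampled on the
                          [0.1, 0.3, 1.5]])  # viewer's head in the sensor frame
print(to_scene_frame(viewer_points, R, t))   # each row is (x, y, z) in the scene
```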
Referring to Fig. 6, in this example embodiment, the visible area of the display device corresponding to the viewer can be determined through the following steps S610 to S630:
Step S610: determining the viewer's field of view according to the viewer's spatial position information and viewing angle.
The angle over which the human eye can simultaneously see objects in front of it is called the viewing angle. From small to large, viewing angles can be divided into the following categories:
Monocular viewing angle 1: the viewing angle when one eye looks straight ahead, the eyeball cannot rotate, and the head cannot turn. Taking the right eye as an example, the upper viewing angle is usually 50°, the lower viewing angle is usually 70°, the left viewing angle is usually 56°, and the right viewing angle is usually 100°.
Monocular viewing angle 2: the viewing angle when one eye looks straight ahead, the eyeball cannot rotate, but the head can turn. To represent the eyeball's visual range more completely, the occlusion of the eye socket and nose can be removed; taking the right eye as an example, the upper viewing angle is usually 55°, the lower viewing angle is usually 75°, the left viewing angle is usually 60°, and the right viewing angle is usually 100°.
Binocular viewing angle 1: the viewing angle when both eyes look straight ahead, the eyeballs cannot rotate, and the head cannot turn. The upper and lower viewing angles usually total 120°, and the left and right viewing angles usually total 200°.
Binocular viewing angle 2: the viewing angle when both eyes look straight ahead, the eyeballs cannot rotate, but the head can turn. The upper and lower viewing angles usually total 130°, and the left and right viewing angles usually total 200°.
Monocular viewing angle 3: the viewing angle when the eyeball can rotate but the head cannot turn. Taking the right eye as an example, the upper viewing angle is usually 70°, the lower viewing angle is usually 80°, the left viewing angle is usually 65°, and the right viewing angle is usually 115°.
Binocular viewing angle 3: the viewing angle when the eyeballs can rotate but the head cannot turn. The upper and lower viewing angles usually total 150°, and the left and right viewing angles usually total 230°.
Referring to Figs. 7 and 8, taking a conference scene as an example, during a meeting a viewer usually watches the writing on the board in a relatively comfortable posture, that is, the eyeballs can rotate while the head essentially does not turn; the viewer's monocular horizontal viewing angle then reaches up to 180°, and the binocular horizontal viewing angle reaches up to 230°. However, within the range seen by the human eye, usually only objects within the central 124° of the visual field appear stereoscopic (such as the central region between X1 and X2 in the figure); human vision is usually most sensitive within 10°, can correctly recognize information within 10°-20°, and is relatively sensitive to moving objects within 20°-30°. When an image's vertical viewing angle is 20° and its horizontal viewing angle is 36°, the viewer usually has a good sense of visual presence without becoming fatigued by frequent eye movement.
Based on the above characteristics of human vision, and referring to Fig. 8, this example embodiment follows "binocular viewing angle 3" above and takes ∠a = ∠b = 115° as the horizontal viewing angles of the left and right eyes, and the binocular overlapping field of view ∠c = 124°, to construct the viewer's eye model. Taking the intersection point A of ∠a and ∠b as the apex, a ray BA is drawn from the midpoint B of the line connecting the two eyes; with ray BA as the 0° line, angles of 62° are drawn to the left and right (i.e., ∠e and ∠f shown in the figure), so that rays AC and AD bound the 124° horizontal field of view. Similarly, referring to Fig. 9, in the vertical direction the field of view spans 75° upward and 75° downward.
Step S620: determining the projection area of the viewer's field of view on the display device.
Step S630: determining the projection area as the visible area corresponding to the viewer.
Referring to Fig. 10, in the viewer's eye model, with apex A as the equivalent viewpoint and in the absence of occlusion, the projection area of the horizontal and vertical fields of view on the whiteboard plane is SQ; the projection area SQ can then be determined as the visible area corresponding to the viewer.
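A simplified sketch of the projection in steps S620 and S630 follows: it assumes the display lies in the plane z = 0, the viewer looks perpendicularly at it from the equivalent viewpoint A, and the field of view is a symmetric frustum with the 62° horizontal and 75° vertical half-angles given above. The whiteboard dimensions and viewpoint coordinates are hypothetical, and the asymmetric eye geometry of Figs. 8 to 10 is not modelled.

```python
import math

def field_of_view_projection(viewpoint, screen_w, screen_h,
                             half_h_deg=62.0, half_v_deg=75.0):
    """Project the viewer's field of view onto the display plane z = 0.

    viewpoint: (x, y, z) of the equivalent viewpoint A in metres, with the
    display occupying 0 <= x <= screen_w, 0 <= y <= screen_h in the z = 0
    plane and the viewer looking along -z. Returns the rectangle SQ clipped
    to the screen, as (x_min, y_min, x_max, y_max).
    """
    ax, ay, az = viewpoint
    half_w = az * math.tan(math.radians(half_h_deg))  # horizontal reach on the plane
    half_h = az * math.tan(math.radians(half_v_deg))  # vertical reach on the plane
    x_min, x_max = max(0.0, ax - half_w), min(screen_w, ax + half_w)
    y_min, y_max = max(0.0, ay - half_h), min(screen_h, ay + half_h)
    return x_min, y_min, x_max, y_max

# Hypothetical example: a viewer 3 m in front of a 1.439 m x 0.8093 m whiteboard.
print(field_of_view_projection((0.7, 0.4, 3.0), 1.439, 0.8093))
```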
Referring to Fig. 11, in a real scene there are often line-of-sight obstructions 500 between the viewer 300 and the display device 100, such as the speaker's body, other viewers' bodies, tables and chairs, or computers. Referring to Fig. 12, if there is no line-of-sight obstruction, a straight line can be drawn between a voxel of the viewer and a display unit of the display device; if there is a line-of-sight obstruction 500, there will be an intersection point on the straight line between the viewer's voxel and the display unit. Accordingly, it can be determined whether a line-of-sight obstruction exists between the viewer 300 and the display device 100, and the viewer's corresponding visible area can be determined according to the line-of-sight obstruction.
Specifically, referring to Fig. 13, in this example embodiment, the visible area of the display device corresponding to the viewer can be determined through the following steps S1310 to S1340:
Step S1310: determining the viewer's field of view according to the viewer's spatial position information and viewing angle. This step is similar to step S610 above and is not repeated here.
Step S1320: determining the first projection area SQ of the viewer's field of view on the display device. This step is similar to step S620 above and is not repeated here.
Step S1330: determining a second projection area, on the display device, of a line-of-sight obstruction located between the viewer and the display device. For example, similarly to step S320 above, in this example embodiment an image of the line-of-sight obstruction and the corresponding depth information can be collected, and the spatial position information of the obstruction in the three-dimensional coordinate system can be determined based on that image and depth information. Then, taking the viewer's equivalent viewpoint as a virtual point light source, the second projection area SZ of the line-of-sight obstruction 500 on the display device 100 is calculated.
Step S1340: determining the area of the first projection area that does not overlap the second projection area as the visible area corresponding to the viewer. For example, referring to Fig. 14, the visible area corresponding to the viewer is the first projection area SQ minus the second projection area SZ. In some exemplary embodiments, however, if the second projection area does not lie entirely within the first projection area, the viewer's corresponding visible area is the first projection area SQ minus the portion of the second projection area that overlaps the first projection area; this is not specifically limited in this example embodiment.
Further, to quantify the size of the visible area corresponding to the viewer, in this example embodiment the display device can be divided into a plurality of display units arranged in an array, where each display unit contains the same number of sub-pixels. As shown in Fig. 15, taking the case where each display unit contains one sub-pixel as an example, the display device can be divided into m rows and n columns of display units according to its resolution. For example, a 65-inch electronic whiteboard with a resolution of 4320×2160 and a screen size of 1439 mm × 809.3 mm has display units of length 1439 mm / 4320 ≈ 0.333 mm, width 809.3 mm / 2160 ≈ 0.3746 mm, and area 0.333 mm × 0.3746 mm. The display units can be named P_11, P_12, ..., P_mn according to their rows and columns. The entire display device can then be represented as:
P = | P_11  P_12  ...  P_1n |
    | P_21  P_22  ...  P_2n |
    | ...   ...   ...  ...  |
    | P_m1  P_m2  ...  P_mn |
Correspondingly, in one exemplary embodiment, the visible area S corresponding to the viewer shown in Fig. 14 can be expressed as:
S = { P_ij | display unit P_ij lies within the first projection area SQ and outside the second projection area SZ }
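The following sketch combines steps S1330 and S1340 with the display-unit grid just described: display units inside the first projection area SQ are marked visible, and units shadowed by an obstruction, obtained by casting rays from the equivalent viewpoint A (treated as a virtual point light source) through sampled obstruction points onto the display plane, are removed. It is only a rough approximation: each sampled obstruction point blocks a small neighbourhood of units rather than the exact silhouette SZ, and all coordinates are hypothetical.

```python
import numpy as np

def visible_units(sq_rect, obstacle_points, viewpoint, unit_w, unit_h, m, n):
    """Boolean m x n mask of display units P_ij that lie inside SQ and
    outside the obstruction's projection SZ (display plane assumed at z = 0)."""
    # Centre coordinates of every display unit P_ij.
    xs = (np.arange(n) + 0.5) * unit_w
    ys = (np.arange(m) + 0.5) * unit_h
    gx, gy = np.meshgrid(xs, ys)

    x0, y0, x1, y1 = sq_rect
    visible = (gx >= x0) & (gx <= x1) & (gy >= y0) & (gy <= y1)  # inside SQ

    ax, ay, az = viewpoint
    for px, py, pz in obstacle_points:          # cast a ray from A through each
        s = az / (az - pz)                      # obstruction point to the plane z = 0
        sx, sy = ax + s * (px - ax), ay + s * (py - ay)
        visible &= ~((np.abs(gx - sx) < unit_w) & (np.abs(gy - sy) < unit_h))
    return visible

# Hypothetical 6 x 6 grid of display units and a single obstruction point.
mask = visible_units((0.0, 0.0, 1.439, 0.8093),
                     obstacle_points=[(0.7, 0.3, 1.0)],
                     viewpoint=(0.7, 0.4, 3.0),
                     unit_w=1.439 / 6, unit_h=0.8093 / 6, m=6, n=6)
print(mask.astype(int))
```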
In step S220, the writing area of the writer on the display device is determined, and a display area of the display device outside the writing area is determined as a candidate display area.
In this example embodiment, the content written by the writer on the display device can be acquired first. For example, the writer's handwriting strokes are detected and all of the strokes are taken as the written content. Then, the smallest rectangular area capable of enclosing the written content is determined as the writing area. Of course, in further exemplary embodiments, another smallest convex polygonal area (such as a trapezoid or regular hexagon) or a circular area capable of enclosing the written content may also be determined as the writing area, and this likewise falls within the scope of protection of the present disclosure.
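A minimal sketch of the bounding step described above: all sampled stroke points are collected and the smallest axis-aligned rectangle enclosing them is returned; the stroke coordinates are hypothetical touch samples.

```python
import numpy as np

def writing_area(stroke_points):
    """Smallest axis-aligned rectangle (x_min, y_min, x_max, y_max) that
    encloses every sampled point of the writer's strokes."""
    pts = np.asarray(stroke_points)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return x_min, y_min, x_max, y_max

# Hypothetical stroke samples (sub-pixel coordinates reported by the touch layer).
strokes = [(120, 340), (180, 360), (260, 330), (150, 420), (300, 405)]
print(writing_area(strokes))  # (120, 330, 300, 420)
```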
In some exemplary embodiments, the writing area may also be a fixed area of the display device, in which case that area can be obtained directly without being re-determined. In some exemplary embodiments, the writing area may also be determined with the writer's assistance; for example, the writer may circle certain areas on the display device (with or without existing written content) as the above writing area. The specific way of determining the writing area is not specifically limited in this example embodiment.
In step S230, a target display area corresponding to the writing area is determined for the viewer within the candidate display area according to the visible area.
Referring to Fig. 16, in this example embodiment, the target display area corresponding to the writing area can be determined for the viewer within the candidate display area through the following steps S1610 to S1640:
Step S1610: dividing the candidate display area into a plurality of sub-candidate display areas.
In this example embodiment, the display device may be divided into a plurality of sub-areas in advance, and then, after the candidate display area is determined, the sub-areas contained in the candidate display area are determined as the sub-candidate display areas it contains; that is, the division of the candidate display area is obtained from the division of the display device. It is also possible to divide the candidate display area only after it has been determined, to obtain the above plurality of sub-candidate display areas; this is not specifically limited in this example embodiment.
For example, in this example embodiment the display device can be divided according to a predetermined division rule; for instance, if each sub-area contains 240×120 sub-pixels, the above display device with a resolution of 4320×2160 is divided into 18×18 = 324 sub-areas; then, after the candidate display area is determined, each sub-area within the candidate display area can be determined as a sub-candidate display area it contains. The specific division rule can be determined according to the properties of the display device, which likewise is not specifically limited in this example embodiment.
Step S1620: determining, according to the size of the target display area, the number N of sub-candidate display areas the target display area needs to contain.
In this example embodiment, the size of the target display area can first be determined according to the size of the writing area, where the size of the target display area is not smaller than the size of the writing area and is an integer multiple of that of a sub-candidate display area. For example, if the writing area is 400×200 sub-pixels and a sub-candidate display area is 240×120 sub-pixels, then, since the size of the target display area is an integer multiple of that of a sub-candidate display area, the target display area needs to contain 4 sub-candidate display areas and its size is 480×240 sub-pixels.
In other exemplary embodiments of the present disclosure, the writing area may also be enlarged by a preset multiple, and the size of the target display area then determined according to the size of the enlarged writing area. For example, if the writing area is 400×200 sub-pixels, the writing area can be enlarged X times so that the viewer can read the written content more clearly, where X can be a positive number greater than 1, such as 1.5, 2, or 4. If the writing area is enlarged 2.25 times, the target display area needs to be larger than 600×300 sub-pixels; since a sub-candidate display area is 240×120 sub-pixels and the size of the target display area is an integer multiple of that of a sub-candidate display area, the target display area needs to contain 9 sub-candidate display areas and its size is 720×360 sub-pixels.
It should be noted that, in other exemplary embodiments of the present disclosure, the number N of sub-candidate display areas the target display area needs to contain may also be determined in other ways, which is not limited in this example embodiment.
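The arithmetic of the two examples above can be sketched as follows; the enlargement multiple is interpreted here as an area factor (so each side scales by its square root), which is an assumption consistent with the 2.25x example yielding a 600 x 300 sub-pixel area.

```python
import math

def target_display_size(writing_w, writing_h, sub_w, sub_h, multiple=1.0):
    """Number N of sub-candidate display areas the target display area must
    contain, and its size, after enlarging the writing area `multiple` times
    (interpreted here as an area factor, so each side scales by sqrt(multiple))."""
    w = writing_w * math.sqrt(multiple)
    h = writing_h * math.sqrt(multiple)
    cols = math.ceil(w / sub_w)   # sub-areas needed horizontally
    rows = math.ceil(h / sub_h)   # sub-areas needed vertically
    return cols * rows, (cols * sub_w, rows * sub_h)

# The two examples from the text: a 400 x 200 writing area and 240 x 120 sub-areas.
print(target_display_size(400, 200, 240, 120))                 # (4, (480, 240))
print(target_display_size(400, 200, 240, 120, multiple=2.25))  # (9, (720, 360))
```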
步骤S1630、根据所述可视区域对各所述子候选显示区域进行评价。
参考图17所示,本示例实施方式中,可以通过下述步骤S1710至步骤S1730对各所述子候选显示区域进行评价。其中:
步骤S1710、根据各所述观看者对应的可视区域确定各所述观看者对应的子候选显示 区域。举例而言,参考图18所示,假设根据上述步骤1340中确定的可视区域包含的子像素得到:观看者甲对应的对应可视区域301包含子候选显示区域2、3、8、9,观看者乙对应的对应可视区域302包含子候选显示区域3、4、9、10,观看者丙对应的对应可视区域303包含子候选显示区域9、10、15、16。此外,为减少后续评价的运算复杂度,本示例实施方式中,对于未被完全包含在观看者的可视区域中的子候选显示区域,可以视为并非该观看者的可视区域包含的子候选显示区域;例如,观看者甲的可视区域301不包含子候选显示区域1、7、13、14、15,观看者乙的可视区域302不包含子候选显示区域5、11、15、16、17等;但在一些示例性实施例中,对于未被完全包含在观看者的可视区域中的子候选显示区域,也可以按照一定的规则(例如按照面积设置系数)计入该观看者的可视区域包含的子候选显示区域,本示例性实施例中对此不做特殊限定。
Step S1720: for each sub-candidate display area, determining the number of occurrences of that sub-candidate display area among the sub-candidate display areas corresponding to the viewers. For example, continuing to refer to FIG. 18, sub-candidate display area 3 appears in both the visible area 301 of viewer A and the visible area 302 of viewer B; sub-candidate display area 9 appears in the visible area 301 of viewer A, the visible area 302 of viewer B, and the visible area 303 of viewer C; sub-candidate display area 10 appears in both the visible area 302 of viewer B and the visible area 303 of viewer C; and so on. After counting, sub-candidate display areas 2, 4, 8, 15, and 16 each appear once among the sub-candidate display areas corresponding to all viewers; sub-candidate display areas 3 and 10 each appear twice; and sub-candidate display area 9 appears three times.
Step S1730: determining an evaluation result of the sub-candidate display area according to the number of occurrences. For example, continuing to refer to FIG. 18, sub-candidate display area 9 appears the most times, so a higher evaluation score may be set for sub-candidate display area 9; correspondingly, sub-candidate display areas 2, 4, 8, 15, and 16 appear fewer times, so lower evaluation scores may be set for these sub-candidate display areas. In this example embodiment, the number of occurrences of a sub-candidate display area may be used directly as its evaluation score; for example, the evaluation score of sub-candidate display area 9 is 3, and the evaluation scores of sub-candidate display areas 2, 4, 8, 15, and 16 are 1. However, in other exemplary embodiments of the present disclosure, the number of occurrences of a sub-candidate display area may also be converted into a score on a 100-point or 10-point scale, or a weighted calculation may be applied to the number of occurrences to obtain the evaluation result of the sub-candidate display area; this is not specially limited in this exemplary embodiment.
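Steps S1710 to S1730 amount to counting, for each sub-candidate display area, the number of viewers whose visible area fully contains it. The sketch below reproduces the example of FIG. 18; the viewer labels and the use of Python sets are illustrative assumptions.

```python
# Minimal sketch: score each sub-candidate display area by the number of
# viewers in whose visible area it appears (occurrence count == score here).
from collections import Counter

def score_sub_areas(viewer_sub_areas):
    """viewer_sub_areas: dict mapping viewer -> set of sub-candidate area ids."""
    counts = Counter()
    for areas in viewer_sub_areas.values():
        counts.update(areas)
    return counts

scores = score_sub_areas({
    "viewer A": {2, 3, 8, 9},
    "viewer B": {3, 4, 9, 10},
    "viewer C": {9, 10, 15, 16},
})
# scores[9] == 3, scores[3] == scores[10] == 2, scores[2] == scores[4] == ... == 1
```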
Step S1640: selecting, according to the evaluation result of the evaluating, N adjacent sub-candidate display areas as the target display area.
Continuing to refer to FIG. 18, if the target display area 120 needs to contain 3 sub-candidate display areas, then in one exemplary embodiment, since sub-candidate display area 9 has the highest evaluation score, the target display area needs to contain at least sub-candidate display area 9. The groups of 3 adjacent sub-candidate display areas containing sub-candidate display area 9 are sub-candidate display areas 7, 8, 9; sub-candidate display areas 8, 9, 10; and sub-candidate display areas 9, 10, 11. The total evaluation score of sub-candidate display areas 7, 8, 9 is 4; the total evaluation score of sub-candidate display areas 8, 9, 10 is 6; and the total evaluation score of sub-candidate display areas 9, 10, 11 is 5. Therefore, sub-candidate display areas 8, 9, 10 can be taken as the target display area.
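For the single row of FIG. 18, this selection reduces to a sliding-window search over windows of N adjacent areas that contain the top-scoring area. The sketch below uses a one-row layout and illustrative names and scores taken from the example above; a two-dimensional window search would follow the same idea.

```python
# Minimal sketch of step S1640: among the windows of N adjacent sub-candidate
# display areas that contain the highest-scoring area, pick the window with
# the highest total score.
def select_target_window(row_ids, scores, n):
    """Return the N adjacent area ids (within one row) with the best total score."""
    best_id = max(row_ids, key=lambda a: scores.get(a, 0))
    best_window, best_total = None, -1
    for start in range(len(row_ids) - n + 1):
        window = row_ids[start:start + n]
        if best_id not in window:
            continue                   # the target must contain the top-scoring area
        total = sum(scores.get(a, 0) for a in window)
        if total > best_total:
            best_window, best_total = window, total
    return best_window

scores = {2: 1, 3: 2, 4: 1, 8: 1, 9: 3, 10: 2, 15: 1, 16: 1}
print(select_target_window([7, 8, 9, 10, 11, 12], scores, 3))   # -> [8, 9, 10]
```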
However, as those skilled in the art will readily understand, in other exemplary embodiments of the present disclosure the N adjacent sub-candidate display areas serving as the target display area may also be selected in other manners; for example, the N sub-candidate display areas with the highest total evaluation score may be selected directly as the target display area. In other exemplary embodiments of the present disclosure, the sub-candidate display areas may also be evaluated in other manners, for example according to the positions of the sub-candidate display areas within the visible areas of the viewers; all of these fall within the protection scope of the present disclosure.
In step S240, the written content of the writing area is displayed in the target display area.
In this example embodiment, after the target display area is determined, the written content may be displayed directly in the target display area, or the written content may be processed before being displayed in the target display area; for example, the handwriting of the written content may be darkened, optimized, or aligned before being displayed in the target display area.
In some exemplary embodiments of the present disclosure, the viewer may also be determined according to a viewer selection instruction from the writer or another user. That is, in these exemplary embodiments, not all users viewing the display device are necessarily the above viewer; only designated users viewing the display device are the above viewer. For example, the viewer may be determined according to the professional field, level of interest, or role of the users viewing the display device; the viewer may also be determined by the writer or another user according to the issue to be discussed or the persons to be interacted with. Then, through the method of the present application, the target display area is determined mainly for the viewer, and the written content of the writing area is displayed in the target display area. In this way, interaction between the presenter and the designated viewer is made more convenient, improving the user experience and communication efficiency.
It should be understood that although the steps in the flowcharts of the drawings are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the drawings may include multiple sub-steps or multiple stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Further, this example embodiment also provides a content display apparatus. Referring to FIG. 19, the content display apparatus 1900 may include a visible area determining module 1910, a first display area determining module 1920, a second display area determining module 1930, and a content display module 1940, in which:
The visible area determining module 1910 may be configured to determine, according to spatial position information of a viewer and a viewing angle, a visible area of a display device corresponding to the viewer; the first display area determining module 1920 may be configured to determine a writing area of a writer on the display device, and determine a display area of the display device other than the writing area as a candidate display area; the second display area determining module 1930 may be configured to determine, according to the visible area, a target display area corresponding to the writing area for the viewer in the candidate display area; and the content display module 1940 may be configured to display the written content of the writing area in the target display area.
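One possible composition of the four modules is sketched below. This is not the application's code; the class, attribute, and method names, and the `determine`/`show` interfaces, are assumptions made only to illustrate how the modules cooperate.

```python
# Minimal sketch of the content display apparatus 1900 composed of its four modules.
class ContentDisplayApparatus:
    def __init__(self, visible_area_module, first_area_module,
                 second_area_module, content_display_module):
        self.visible_area_module = visible_area_module        # module 1910
        self.first_area_module = first_area_module            # module 1920
        self.second_area_module = second_area_module          # module 1930
        self.content_display_module = content_display_module  # module 1940

    def run(self, viewers, writer, display):
        visible = self.visible_area_module.determine(viewers, display)
        writing, candidate = self.first_area_module.determine(writer, display)
        target = self.second_area_module.determine(visible, candidate, writing)
        self.content_display_module.show(writing, target)
```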
The specific details of each module of the above content display apparatus have already been described in detail in the corresponding content display method, and are therefore not repeated here.
It should be noted that although several modules or units of the device for action execution are mentioned in the detailed description above, this division is not mandatory. In fact, according to the embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided into and embodied by multiple modules or units.
The component embodiments of the present disclosure may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof.
In an exemplary embodiment of the present disclosure, an electronic device is also provided, including: a processor; and a memory configured to store processor-executable instructions, where the processor is configured to execute any of the methods described in this example embodiment.
FIG. 20 shows a schematic structural diagram of a computer system of an electronic device for implementing an embodiment of the present disclosure. It should be noted that the computer system 2000 of the electronic device shown in FIG. 20 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in FIG. 20, the computer system 2000 includes a central processing unit 2001, which can perform various appropriate actions and processes according to a program stored in a read-only memory 2002 or a program loaded from a storage portion 2008 into a random access memory 2003. The random access memory 2003 also stores various programs and data required for system operation. The central processing unit 2001, the read-only memory 2002, and the random access memory 2003 are connected to one another through a bus 2004. An input/output interface 2005 is also connected to the bus 2004.
The following components are connected to the input/output interface 2005: an input portion 2006 including a keyboard, a mouse, and the like; an output portion 2007 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker; a storage portion 2008 including a hard disk and the like; and a communication portion 2009 including a network interface card such as a local area network (LAN) card or a modem. The communication portion 2009 performs communication processing via a network such as the Internet. A drive 2010 is also connected to the input/output interface 2005 as needed. A removable medium 2011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 2010 as needed, so that a computer program read therefrom is installed into the storage portion 2008 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 2009, and/or installed from the removable medium 2011. When the computer program is executed by the central processing unit 2001, the various functions defined in the apparatus of the present application are executed.
In an exemplary embodiment of the present disclosure, a non-volatile computer-readable storage medium is also provided, on which a computer program is stored; when the computer program is executed by a computer, the computer executes any of the methods described above.
It should be noted that the non-volatile computer-readable storage medium shown in the present disclosure may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in combination with an instruction execution system, apparatus, or device. Also in the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, radio frequency, and the like, or any suitable combination of the foregoing.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.

Claims (13)

  1. A content display method, comprising:
    determining, according to spatial position information of a viewer and a viewing angle, a visible area of a display device corresponding to the viewer;
    determining a writing area of a writer on the display device, and determining a display area of the display device other than the writing area as a candidate display area;
    determining, according to the visible area, a target display area corresponding to the writing area for the viewer in the candidate display area; and
    displaying written content of the writing area in the target display area.
  2. The content display method according to claim 1, wherein the determining a visible area of a display device corresponding to the viewer comprises:
    determining a field of view of the viewer according to the spatial position information of the viewer and the viewing angle;
    determining a projection area of the field of view of the viewer on the display device; and
    determining the projection area as the visible area corresponding to the viewer.
  3. The content display method according to claim 1, wherein the determining a visible area of a display device corresponding to the viewer comprises:
    determining a field of view of the viewer according to the spatial position information of the viewer and the viewing angle;
    determining a first projection area of the field of view of the viewer on the display device;
    determining a second projection area, on the display device, of a sight obstruction located between the viewer and the display device; and
    determining an area of the first projection area that does not overlap the second projection area as the visible area corresponding to the viewer.
  4. The content display method according to claim 1, wherein the determining a writing area of a writer on the display device comprises:
    obtaining content written by the writer on the display device; and
    determining a smallest rectangular area capable of enclosing the written content as the writing area.
  5. The content display method according to any one of claims 1 to 4, wherein the determining a target display area corresponding to the writing area for the viewer in the candidate display area comprises:
    dividing the candidate display area into a plurality of sub-candidate display areas;
    determining, according to a size of the target display area, a number N of sub-candidate display areas that the target display area needs to contain;
    evaluating each of the sub-candidate display areas according to the visible area; and
    selecting, according to an evaluation result of the evaluating, N adjacent sub-candidate display areas as the target display area.
  6. The content display method according to claim 5, further comprising:
    determining the size of the target display area according to a size of the writing area,
    wherein the size of the target display area is not smaller than the size of the writing area, and the size of the target display area is an integer multiple of the size of the sub-candidate display area.
  7. The content display method according to claim 6, wherein the determining the size of the target display area according to a size of the writing area comprises:
    enlarging the writing area by a preset factor; and
    determining the size of the target display area according to a size of the enlarged writing area.
  8. The content display method according to claim 5, wherein there are a plurality of viewers, and the evaluating each of the sub-candidate display areas according to the visible area comprises:
    determining sub-candidate display areas corresponding to each of the viewers according to the visible area corresponding to that viewer;
    for each of the sub-candidate display areas, determining a number of occurrences of that sub-candidate display area among the sub-candidate display areas corresponding to the viewers; and
    determining an evaluation result of that sub-candidate display area according to the number of occurrences.
  9. The content display method according to any one of claims 1 to 4 or 6 to 8, further comprising:
    obtaining a depth image of a scene, determining point cloud data of the scene according to the depth image, and creating a three-dimensional spatial coordinate system according to the point cloud data; and
    collecting pose information of the viewer, and determining the spatial position information of the viewer in the three-dimensional spatial coordinate system according to the pose information of the viewer.
  10. The content display method according to claim 9, further comprising:
    determining the viewer according to a viewer selection instruction.
  11. A content display apparatus, comprising:
    a visible area determining module, configured to determine, according to spatial position information of a viewer and a viewing angle, a visible area of a display device corresponding to the viewer;
    a first display area determining module, configured to determine a writing area of a writer on the display device, and determine a display area of the display device other than the writing area as a candidate display area;
    a second display area determining module, configured to determine, according to the visible area, a target display area corresponding to the writing area for the viewer in the candidate display area; and
    a content display module, configured to display written content of the writing area in the target display area.
  12. An electronic device, comprising:
    a processor; and
    a memory configured to store one or more programs which, when executed by the processor, cause the processor to implement the method according to any one of claims 1 to 10.
  13. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 10.
PCT/CN2021/126493 2021-10-26 2021-10-26 内容显示方法、内容显示装置、存储介质及电子设备 WO2023070329A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US18/685,197 US20240346965A1 (en) 2021-10-26 2021-10-26 Method for displaying content, storage medium, and electronic device
PCT/CN2021/126493 WO2023070329A1 (zh) 2021-10-26 2021-10-26 内容显示方法、内容显示装置、存储介质及电子设备
CN202180003075.3A CN116348840A (zh) 2021-10-26 2021-10-26 内容显示方法、内容显示装置、存储介质及电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/126493 WO2023070329A1 (zh) 2021-10-26 2021-10-26 内容显示方法、内容显示装置、存储介质及电子设备

Publications (2)

Publication Number Publication Date
WO2023070329A1 true WO2023070329A1 (zh) 2023-05-04
WO2023070329A9 WO2023070329A9 (zh) 2024-02-15

Family

ID=86158998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/126493 WO2023070329A1 (zh) 2021-10-26 2021-10-26 内容显示方法、内容显示装置、存储介质及电子设备

Country Status (3)

Country Link
US (1) US20240346965A1 (zh)
CN (1) CN116348840A (zh)
WO (1) WO2023070329A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009116727A (ja) * 2007-11-08 2009-05-28 Sharp Corp 画像入力表示装置
CN104346125A (zh) * 2014-09-30 2015-02-11 福建扬航电子技术有限公司 一种同屏分区显示的方法及装置
CN106970681A (zh) * 2017-02-21 2017-07-21 广州视源电子科技股份有限公司 书写显示方法及其系统
US20210232294A1 (en) * 2020-01-27 2021-07-29 Fujitsu Limited Display control method and information processing apparatus
CN111414114A (zh) * 2020-03-18 2020-07-14 北京星网锐捷网络技术有限公司 一种显示调整方法、装置、电子设备及存储介质
CN112887689A (zh) * 2021-01-21 2021-06-01 联想(北京)有限公司 一种显示方法及装置

Also Published As

Publication number Publication date
US20240346965A1 (en) 2024-10-17
CN116348840A (zh) 2023-06-27
WO2023070329A9 (zh) 2024-02-15

Similar Documents

Publication Publication Date Title
Sereno et al. Collaborative work in augmented reality: A survey
US10694146B2 (en) Video capture systems and methods
Francone et al. Using the user's point of view for interaction on mobile devices
US9886102B2 (en) Three dimensional display system and use
US10957103B2 (en) Dynamic mapping of virtual and physical interactions
US9335888B2 (en) Full 3D interaction on mobile devices
WO2021213067A1 (zh) 物品显示方法、装置、设备及存储介质
Wang et al. Distanciar: Authoring site-specific augmented reality experiences for remote environments
JP7519390B2 (ja) 新規ビュー合成のためのニューラルブレンド
EP2102822A1 (en) Method and apparatus for generating stereoscopic image from two-dimensional image by using mesh map
CN111462339B (zh) 增强现实中的显示方法和装置、介质和电子设备
WO2023040609A1 (zh) 三维模型风格化方法、装置、电子设备及存储介质
US11481960B2 (en) Systems and methods for generating stabilized images of a real environment in artificial reality
US9001157B2 (en) Techniques for displaying a selection marquee in stereographic content
CN110286906B (zh) 用户界面显示方法、装置、存储介质与移动终端
Cao et al. Feature guided path redirection for vr navigation
CN113920282A (zh) 图像处理方法和装置、计算机可读存储介质、电子设备
Wischgoll Display systems for visualization and simulation in virtual environments
WO2023070329A1 (zh) 内容显示方法、内容显示装置、存储介质及电子设备
US20230260218A1 (en) Method and apparatus for presenting object annotation information, electronic device, and storage medium
JP2004356789A (ja) 立体映像表示装置及びプログラム
Du Fusing multimedia data into dynamic virtual environments
Besada et al. Design and user experience assessment of Kinect-based Virtual Windows
CN115278202B (zh) 显示方法及装置
CN114339120A (zh) 沉浸式视频会议系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21961705

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18685197

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE