WO2023228565A1 - Display control device - Google Patents

Display control device

Info

Publication number
WO2023228565A1
WO2023228565A1 (application PCT/JP2023/013156, JP2023013156W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
user
real
amount
visual field
Prior art date
Application number
PCT/JP2023/013156
Other languages
English (en)
Japanese (ja)
Inventor
裕一 市川
真治 木村
修 後藤
宏暢 藤野
拓郎 栗原
健吾 松本
泰士 山本
Original Assignee
株式会社Nttドコモ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Nttドコモ
Publication of WO2023228565A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators

Definitions

  • the present invention relates to a display control device.
  • Patent Document 1 relates to a technology for allowing users to comfortably experience AR technology.
  • In the technology of Patent Document 1, the virtual object is displayed while the user is stationary, and the display of the virtual object is stopped while the user is walking. Because having all the virtual objects disappear at once would give the user a sense of discomfort, the plurality of virtual objects are moved out of the field of view one after another.
  • While displaying virtual objects using XR technology is useful for users, depending on the user's surroundings, it may be desirable to limit the amount of virtual objects displayed. Specifically, if many virtual objects are displayed in a situation where, for example, the user is surrounded by crowds and must pay attention to walking, the user may find the virtual objects bothersome.
  • the above-mentioned conventional technology switches the display or non-display of the virtual object using a change in the user's behavior as a trigger, and does not change the display of the virtual object in accordance with the user's surrounding situation.
  • An object of the present invention is to display an appropriate amount of virtual objects according to the user's surroundings.
  • According to one aspect of the present invention, a display control device includes: an acquisition unit that acquires a captured image of an area including a visual field range of a user in real space; a detection unit that detects, based on the captured image, the amount of at least one real object located in the visual field range; and a determination unit that determines, based on the amount of the at least one real object, the amount of at least one virtual object to be superimposed and displayed on the visual field range from among the at least one virtual object associated with the visual field range.
  • According to the present invention, an appropriate amount of virtual objects is displayed according to the user's surroundings.
  • FIG. 1 is a block diagram showing the configuration of a system 1 according to an embodiment.
  • FIG. 2 is a block diagram showing the configuration of a terminal device 10.
  • FIG. 3 is a block diagram showing the configuration of a server 20.
  • FIG. 4A, FIG. 5A, FIG. 6A, FIG. 7A, and FIG. 8A are schematic diagrams each showing an example of a captured image P.
  • FIG. 4B, FIG. 5B, FIG. 6B, FIG. 7B, and FIG. 8B are schematic diagrams each showing an example of a user's visual field U that is visually recognized via the terminal device 10.
  • FIG. 9 is a flowchart showing the operation of the processing device 108.
  • FIG. 1 is a block diagram showing the configuration of a system 1 according to an embodiment.
  • the system 1 includes a terminal device 10 and a server 20.
  • the terminal device 10 is an example of a display control device.
  • the terminal device 10 and the server 20 are connected via a communication network N. Note that although only one terminal device 10 is illustrated in FIG. 1, the system 1 can include any number of terminal devices 10.
  • the system 1 is a system that presents various information to a user holding a terminal device 10 using AR technology.
  • the AR technology is a technology that displays a virtual object V (see FIG. 4B, etc.) superimposed on a real space, thereby allowing the user to view the virtual object V as if it existed in the real space. That is, the terminal device 10 displays the virtual object V associated with the real space to the user.
  • the virtual object V is, for example, a still image, a moving image, a 3DCG model, text, or the like. Note that when displaying the virtual object V, the terminal device 10 may output other types of information, such as audio information.
  • the terminal device 10 is, for example, a see-through head-mounted display, or a mobile information processing terminal such as a smartphone or a tablet. In this embodiment, it is assumed that the terminal device 10 is a see-through head-mounted display.
  • the server 20 stores a plurality of virtual object data D (see FIG. 3) for displaying the virtual object V on the terminal device 10. Each virtual object data D corresponds to a different virtual object V on a one-to-one basis.
  • the terminal device 10 receives the virtual object data D from the server 20 and displays the virtual object V corresponding to the virtual object data D.
  • FIG. 2 is a block diagram showing the configuration of the terminal device 10.
  • the shape of the terminal device 10 is, for example, the same shape as general eyeglasses.
  • the terminal device 10 has a left lens placed in front of the user's left eye, a right lens placed in front of the user's right eye, and a frame that supports the left lens and the right lens.
  • the frame has a bridge provided between the left lens and the right lens, and a pair of temples that extend over the left and right ears.
  • the terminal device 10 includes a projection device 101, an imaging device 102, a communication device 103, a GPS device 104, a storage device 107, a processing device 108, and a bus 109.
  • Each component shown in FIG. 2 is housed, for example, in the frame.
  • the projection device 101, the imaging device 102, the communication device 103, the GPS device 104, the storage device 107, and the processing device 108 are interconnected by a bus 109 for communicating information.
  • the bus 109 may be configured using a single bus, or may be configured using different buses for each element such as a device.
  • the projection device 101 includes left and right lenses, a display panel, and optical members.
  • the display panel and the optical member are housed in a frame, for example.
  • the display panel and the optical member may be provided in pairs on each side, corresponding to the left and right lenses.
  • the projection device 101 displays a projected image of the virtual object V on a display panel based on control from the processing device 108.
  • the display panel is, for example, a liquid crystal panel or an organic EL (Electro Luminescence) panel.
  • the optical member guides the light emitted from the display panel to the left and right lenses.
  • Each of the left lens and right lens has a half mirror.
  • the half mirrors of the left lens and the right lens transmit the light representing the real space, thereby guiding the light representing the real space to the user's eyes. Further, the half mirrors included in the left lens and the right lens reflect the light indicating the virtual object V guided by the optical member toward the user's eyes.
  • the left lens and right lens function as a transmissive display.
  • the imaging device 102 images a subject and outputs captured image information indicating the captured image (hereinafter referred to as "captured image P").
  • the imaging device 102 includes, for example, an imaging optical system and an imaging element.
  • the imaging optical system is an optical system that includes at least one imaging lens.
  • the imaging lens is disposed, for example, on the above-mentioned bridge so as to face the user's visual field range U. For this reason, the imaging device 102 images an area including the user's visual field range U in the real space.
  • the captured image P of the imaging device 102 is an image obtained by capturing an area including the user's visual field range U in the real space.
  • the imaging optical system may include various optical elements such as a prism, or may include a zoom lens, a focus lens, or the like.
  • the image sensor is, for example, a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary MOS) image sensor.
  • the correspondence between each pixel of the captured image P captured by the imaging device 102 and each pixel of the transmissive display realized by the left and right lenses is calibrated in advance. That is, it is known which position on the left and right lenses an object appearing in the captured image P occupies when viewed from the user wearing the terminal device 10. Therefore, the display control unit 115, which will be described later, can project the virtual object V, whose display position is determined based on the captured image P, onto the left and right lenses.
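  • As a rough sketch of how such a pre-calibrated correspondence could be used, assuming a simple per-lens homography (the matrix values and function names are illustrative, not taken from this publication):
```python
import numpy as np

# Hypothetical calibration: a 3x3 homography maps a pixel of the captured image P
# to a pixel on the left-lens display. The values are placeholders; a real device
# would obtain them from the advance calibration described above.
H_LEFT = np.array([
    [0.92, 0.00, 40.0],
    [0.00, 0.92, 25.0],
    [0.00, 0.00, 1.0],
])

def captured_to_display(u: float, v: float, homography: np.ndarray) -> tuple[float, float]:
    """Map pixel (u, v) of the captured image P to a position on the lens display."""
    dst = homography @ np.array([u, v, 1.0])
    return dst[0] / dst[2], dst[1] / dst[2]

# Example: where to project a virtual object V anchored at pixel (640, 360) of P.
x, y = captured_to_display(640.0, 360.0, H_LEFT)
```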
  • the communication device 103 communicates with the server 20 using wireless communication or wired communication.
  • the communication device 103 includes an interface connectable to the communication network N, and communicates with the communication device 203 of the server 20 (see FIG. 3) via the communication network N.
  • the GPS device 104 receives radio waves from a plurality of satellites, and generates terminal position information indicating the position of the terminal device 10 from the received radio waves.
  • the terminal location information may be in any format as long as it can specify the location in real space. In this embodiment, latitude and longitude are used as terminal location information. Note that the terminal location information may be obtained using means other than the GPS device 104. For example, information specifying the name of the facility where the terminal device 10 is located and the location in the facility may be transmitted using a beacon or the like provided in the facility.
  • the terminal location information is transmitted by the communication device 103 to the server 20 via the communication network N.
  • sensors such as a geomagnetic sensor, an acceleration sensor, an angular acceleration sensor, or an inertial measurement unit (IMU) may be used to detect the position and posture of the terminal device 10.
  • the storage device 107 is a recording medium that can be read by the processing device 108.
  • the storage device 107 includes, for example, nonvolatile memory and volatile memory. Examples of nonvolatile memories include ROM (Read Only Memory), EPROM (Erasable Programmable Read Only Memory), and EEPROM (Electrically Erasable Programmable Read Only Memory).
  • the volatile memory is, for example, RAM (Random Access Memory).
  • Storage device 107 stores program PG1.
  • the program PG1 is a program for operating the terminal device 10.
  • the processing device 108 includes one or more CPUs (Central Processing Units).
  • One or more CPUs are an example of one or more processors.
  • a processor and a CPU are each an example of a computer.
  • the processing device 108 reads the program PG1 from the storage device 107.
  • the processing device 108 functions as the first acquisition section 111, the second acquisition section 112, the detection section 113, the determination section 114, and the display control section 115 by executing the program PG1.
  • At least one of the first acquisition unit 111, the second acquisition unit 112, the detection unit 113, the determination unit 114, and the display control unit 115 may be configured by a circuit such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array). Details of the first acquisition unit 111, second acquisition unit 112, detection unit 113, determination unit 114, and display control unit 115 will be described later.
  • the terminal device 10 when the terminal device 10 is a mobile information processing terminal such as a smartphone or a tablet, the terminal device 10 includes a display instead of the projection device 101.
  • the terminal device 10 displays a captured image P captured by the imaging device 102 on a display.
  • the terminal device 10 displays a virtual object V corresponding to the virtual object data D acquired from the server 20, superimposed on the captured image P displayed on the display.
  • FIG. 3 is a block diagram showing the configuration of the server 20.
  • Server 20 includes a communication device 203, a storage device 205, a processing device 206, and a bus 207.
  • the communication device 203, the storage device 205, and the processing device 206 are interconnected by a bus 207 for communicating information.
  • the bus 207 may be configured using a single bus, or may be configured using different buses for each device.
  • the communication device 203 communicates with the terminal device 10 using wireless communication or wired communication.
  • the communication device 203 includes an interface connectable to the communication network N, and communicates with the terminal device 10 via the communication network N.
  • the storage device 205 is a recording medium that can be read by the processing device 206.
  • Storage device 205 includes, for example, nonvolatile memory and volatile memory.
  • Non-volatile memories are, for example, ROM, EPROM and EEPROM.
  • Volatile memory is, for example, RAM.
  • the storage device 205 stores a program PG2 and a plurality of virtual object data D.
  • the program PG2 is a program for operating the server 20.
  • the virtual object data D is data for outputting the virtual object V on the terminal device 10.
  • the plurality of virtual object data D correspond to different virtual objects V, respectively.
  • Each virtual object data D is associated with object position information indicating the position in real space where the virtual object V based on the virtual object data D is displayed. Note that when the same virtual object V is displayed at multiple locations in the real space, for example, one virtual object data D is associated with multiple pieces of object position information.
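  • A minimal sketch of how the virtual object data D and its associated object position information might be modeled (the field names are illustrative assumptions):
```python
from dataclasses import dataclass, field

@dataclass
class ObjectPosition:
    """Object position information: a position in real space where the virtual object V is displayed."""
    latitude: float
    longitude: float
    altitude: float = 0.0

@dataclass
class VirtualObjectData:
    """Virtual object data D stored in the storage device 205 of the server 20."""
    object_id: str          # identifies the corresponding virtual object V
    payload: bytes          # still image, moving image, 3DCG model, text, etc.
    # One virtual object data D may be associated with multiple positions when the
    # same virtual object V is displayed at multiple locations in the real space.
    positions: list[ObjectPosition] = field(default_factory=list)
```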
  • the processing device 206 includes one or more CPUs.
  • One or more CPUs are an example of one or more processors.
  • a processor and a CPU are each an example of a computer.
  • the processing device 206 reads the program PG2 from the storage device 205.
  • the processing device 206 functions as the operation control unit 210 by executing the program PG2.
  • the operation control unit 210 may be configured by a circuit such as a DSP, an ASIC, a PLD, and an FPGA.
  • the operation control unit 210 controls the operation of the server 20.
  • the operation control unit 210 acquires terminal position information indicating the position of the terminal device 10 from the terminal device 10.
  • the operation control unit 210 receives terminal position information via the communication network N using, for example, the communication device 203.
  • the operation control unit 210 selects virtual object data D to be transmitted to the terminal device 10 based on the terminal position information.
  • For example, from among the plurality of virtual object data D, the operation control unit 210 selects, as the virtual object data D to be transmitted to the terminal device 10, the virtual object data D for which the position indicated by the object position information and the position indicated by the terminal position information are within a predetermined distance of each other.
  • the operation control unit 210 transmits the selected virtual object data D to the terminal device 10.
  • the operation control unit 210 transmits the virtual object data D to the terminal device 10 via the communication network N using, for example, the communication device 203.
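  • A minimal sketch of this distance-based selection, reusing the VirtualObjectData sketch above and assuming latitude/longitude positions and a great-circle distance; the threshold value is an illustrative assumption:
```python
import math

EARTH_RADIUS_M = 6_371_000.0
PREDETERMINED_DISTANCE_M = 200.0  # illustrative threshold, not specified by the publication

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance in meters between two latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def select_virtual_object_data(all_data, terminal_lat: float, terminal_lon: float):
    """Select the virtual object data D whose object position information indicates a
    position within the predetermined distance of the terminal position information."""
    return [
        d for d in all_data
        if any(distance_m(p.latitude, p.longitude, terminal_lat, terminal_lon)
               <= PREDETERMINED_DISTANCE_M for p in d.positions)
    ]
```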
  • the first acquisition unit 111 acquires virtual object data D from the server 20.
  • the first acquisition unit 111 controls the communication device 103 to transmit the terminal position information generated by the GPS device 104 to the server 20, for example.
  • the server 20 that has received the terminal location information selects the virtual object data D based on the terminal location information and transmits it to the terminal device 10, as described above.
  • the first acquisition unit 111 acquires the virtual object data D received by the communication device 103.
  • the second acquisition unit 112 acquires a captured image P of an area including the user's visual field U in the real space.
  • the second acquisition unit 112 is an example of an acquisition unit. In this embodiment, the second acquisition unit 112 acquires the captured image P captured by the imaging device 102.
  • the detection unit 113 detects the amount of at least one real object R located in the viewing range U based on the captured image P.
  • the detection unit 113 extracts the real object R appearing in the captured image P using, for example, an image analysis technique, and detects the amount thereof.
  • the amount of the real objects R reflected in the captured image P is the amount of the real objects R located in the visual field range U.
  • the amount of at least one real object R detected by the detection unit 113 is the number of at least one real object R, or the ratio of at least one real object R to the visual field range U.
  • the ratio that the real object R occupies in the visual field range U may be rephrased as the area of the real object R that occupies in the visual field range U (captured image P).
  • the detection unit 113 may determine the type of each real object R appearing in the captured image P using an image recognition technique among image analysis techniques.
  • the type of real object R is, for example, a building, a tree, a person, a vehicle, or the like.
  • real objects R whose positions do not change, such as buildings and trees, may be classified as fixed objects
  • real objects R whose positions change, such as people and vehicles, may be classified as moving objects. That is, the detection unit 113 may detect, among the real objects R, at least one fixed object whose position in real space does not change. Further, the detection unit 113 may detect at least one moving object whose position in real space changes.
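  • A minimal sketch of such detection, assuming an off-the-shelf object detector that returns labeled bounding boxes (the detector output format and label set are assumptions, not specified by this publication):
```python
FIXED_LABELS = {"building", "tree"}      # real objects R whose positions do not change
MOVING_LABELS = {"person", "vehicle"}    # real objects R whose positions change

def detect_amount(detections, image_width: int, image_height: int):
    """detections: iterable of (label, x, y, w, h) bounding boxes found in the captured
    image P. Returns the number of real objects R, the approximate ratio they occupy in
    the visual field range U (sum of box areas, clipped to 1.0), and the counts of fixed
    and moving objects."""
    count = 0
    covered_area = 0.0
    fixed = moving = 0
    for label, x, y, w, h in detections:
        count += 1
        covered_area += w * h
        if label in FIXED_LABELS:
            fixed += 1
        elif label in MOVING_LABELS:
            moving += 1
    ratio = min(covered_area / (image_width * image_height), 1.0)
    return count, ratio, fixed, moving
```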
  • FIG. 4A, FIG. 5A, FIG. 6A, FIG. 7A, and FIG. 8A are schematic diagrams showing an example of the captured image P.
  • FIG. 4A shows a captured image P1, which is an example of the captured image P.
  • the captured image P1 shows three-story buildings R1 to R7 and people R8 to R11 as real objects R. Therefore, the detection unit 113 detects that there are 11 real objects R within the visual field U of the user. Furthermore, in FIG. 4A, the proportion of the captured image P1 occupied by the real object R is about two-thirds. Therefore, the detection unit 113 detects that the proportion of the real object R that occupies the user's visual field U is approximately two-thirds.
  • the unit in which the real object R is counted as one is arbitrary. Specifically, for example, a bag held by person R9 may be counted as one object, or windows and doors provided in buildings R1 to R7 may be counted as one object.
  • the administrator who manages the virtual object V may decide, for example, which unit of the real object R is defined as one. Further, the type of object to be counted as the real object R may also be determined by the administrator who manages the virtual object V, for example. For example, the road shown in the captured image P1 may be counted as one of the real objects R.
  • FIG. 5A shows a captured image P2, which is an example of the captured image P.
  • real objects R include buildings R1 to R7 and people R8 to R19. Therefore, the detection unit 113 detects that there are 19 real objects R within the visual field U of the user. That is, when the captured image P2 and the captured image P1 are compared, the captured image P2 has eight more real objects R. Furthermore, since there are eight more real objects R, the proportion of the real objects R occupying the captured image P2 in FIG. 5A is larger than the proportion of the real objects R occupying the captured image P1 of FIG. 4A.
  • FIG. 6A shows a captured image P3, which is an example of the captured image P.
  • the captured image P3 includes buildings R1 to R7 and people R8 to R11.
  • the buildings R1 to R6 are high-rise buildings and occupy a larger area in the image than in the captured image P1 shown in FIG. 4A. That is, in the captured image P3 shown in FIG. 6A, the proportion of the real objects R occupying the user's visual field U is larger than in the captured image P1 shown in FIG. 4A. Specifically, the proportion of the real objects R in the captured image P3 is about three-quarters.
  • the detection unit 113 detects that the proportion of the real object R that occupies the user's visual field U is approximately three-quarters.
  • FIG. 7A shows a captured image P4, which is an example of the captured image P.
  • real objects R include buildings R1 to R3, R5, and R6 and people R9 to R11. Therefore, the detection unit 113 detects that eight real objects R are present in the visual field U of the user. That is, when comparing the captured image P4 and the captured image P1, the captured image P4 has three fewer real objects R. Furthermore, since the number of real objects R is smaller, the proportion of the real objects R occupying the captured image P4 in FIG. 7A is smaller than the proportion of the real objects R occupying the captured image P1 in FIG. 4A.
  • FIG. 8A shows a captured image P5, which is an example of the captured image P.
  • the captured image P5 includes buildings R1 to R7 and people R8 to R11.
  • the buildings R1 to R6 are two-story buildings, and the area in the image is smaller than that in the captured image P1 shown in FIG. 4A. That is, in the captured image P5 shown in FIG. 8A, the ratio of the real object R occupying the user's visual field U is smaller than in the captured image P1 shown in FIG. 4A.
  • the ratio of the real objects R in the captured image P5 is about one-half.
  • the detection unit 113 detects that the proportion of the real object R that occupies the user's visual field U is approximately one-half.
  • the determining unit 114 determines the amount of at least one virtual object V to be superimposed and displayed on the visual field range U from among the at least one virtual object V associated with the visual field range U.
  • the virtual object V associated with the visual field range U is a virtual object V whose position indicated by the object position information is included in the visual field range U.
  • the amount of at least one virtual object V determined by the determining unit 114 is the number of the at least one virtual object V to be superimposed and displayed in the visual field range U, or the proportion of the visual field range U occupied by the at least one virtual object V to be superimposed and displayed.
  • the ratio of the virtual object V occupying the viewing range U may be rephrased as the area that the virtual object V occupies in the viewing range U.
  • the determining unit 114 determines the amount of the virtual objects V to be actually displayed based on the amount of the real objects R in the visual field U of the user. Specifically, the determining unit 114 decreases the amount of virtual objects V displayed to the user as the amount of real objects R located in the visual field U of the user increases, and increases the amount of virtual objects V displayed to the user as the amount of real objects R decreases.
  • FIG. 4B, FIG. 5B, FIG. 6B, FIG. 7B, and FIG. 8B are schematic diagrams illustrating an example of a user's visual field U that is visually recognized via the terminal device 10.
  • In FIG. 4B, FIG. 5B, FIG. 6B, FIG. 7B, and FIG. 8B, the reference numerals of the real objects R are omitted to ensure visibility.
  • the virtual object V is shown by hatching.
  • In FIG. 4B, virtual objects V1 to V5 are displayed together with the real space corresponding to the captured image P1 shown in FIG. 4A.
  • the virtual object V1 is an image containing text indicating a guide to a cafe located in the building R1.
  • Virtual object V2 is an image of a character.
  • Virtual object V3 is an image of a new product bicycle.
  • Virtual object V4 is an image that includes text that describes the bicycle of virtual object V3.
  • Virtual object V5 is an image of a bird.
  • FIG. 5B is a diagram showing the user's visual field range U2 corresponding to the captured image P2 of FIG. 5A.
  • the determining unit 114 determines that a smaller number of virtual objects V than in FIG. 4B will be displayed in the real space corresponding to FIG. 5A.
  • the determining unit 114 determines to display virtual objects V1, V4, and V5, as shown in FIG. 5B, for example.
  • Compared with FIG. 4B, the number of virtual objects V displayed in the visual field range U2 shown in FIG. 5B is two fewer.
  • the relationship between the number of real objects R in the user's visual field U and the number of displayed virtual objects V may be fixed. For example, if the number of real objects R in the user's visual field U is Nr and the number of virtual objects V to be displayed is Nv, Nv may be determined as a function of Nr.
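  • One possible fixed relationship is sketched below (the breakpoints and returned values are illustrative assumptions, not taken from this publication):
```python
def virtual_object_count(nr: int, nv_max: int = 8) -> int:
    """Determine the number Nv of virtual objects V to display as a monotonically
    non-increasing function of the number Nr of real objects R in the visual field U."""
    if nr <= 8:
        return nv_max              # few real objects: more virtual objects may be shown
    if nr <= 15:
        return max(nv_max - 3, 0)  # e.g., the 11 real objects of FIG. 4A -> 5 virtual objects
    return max(nv_max - 5, 0)      # crowded scene: display only a few virtual objects
```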
  • the relationship between the number of real objects R in the user's field of view U and the number of virtual objects V displayed may also be adjusted in consideration of, for example, weather conditions (presence of precipitation, brightness of the sky, whether or not there is backlight, etc.), the time of day (ambient brightness), and whether or not the user is moving. For example, when there is precipitation, the number of virtual objects V to be displayed may be reduced compared to when there is no precipitation.
  • the detection unit 113 further detects at least one of the weather conditions in the viewing range U, the current time, and whether or not the user is moving.
  • the determining unit 114 determines the amount of at least one virtual object V to be displayed to the user based on at least one of weather conditions, current time, and whether or not the user is moving.
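  • A minimal sketch of how such conditions could further adjust the determined amount, assuming simple multiplicative factors (the factor values are illustrative assumptions):
```python
def adjust_for_conditions(base_count: int, precipitation: bool,
                          low_ambient_brightness: bool, user_moving: bool) -> int:
    """Reduce the number of virtual objects V under conditions that demand more of the
    user's attention (precipitation, darkness, walking)."""
    factor = 1.0
    if precipitation:
        factor *= 0.5    # fewer virtual objects than when there is no precipitation
    if low_ambient_brightness:
        factor *= 0.75   # night time or backlight reduces visibility of the real space
    if user_moving:
        factor *= 0.5    # the user must pay attention to walking
    return max(int(base_count * factor), 0)
```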
  • the determining unit 114 reduces the amount of virtual objects V displayed to the user when the number of real objects R in the user's visual field U is large. Therefore, the visibility of the real object R is ensured. Furthermore, if there are too many objects to which the user pays attention, there is a possibility that the user's attention will be distracted. By changing the number of displayed virtual objects V based on the number of real objects R, the usefulness of displaying the virtual objects V is improved.
  • FIG. 6B is a diagram showing the user's visual field U3 corresponding to the captured image P3 of FIG. 6A.
  • the proportion of the real objects R in the user's visual field U is larger than in the captured image P1 shown in FIG. 4A. Therefore, in the real space corresponding to FIG. 6A, the determining unit 114 displays the virtual objects V in a smaller area than in FIG. 4B, and reduces the proportion of the virtual objects V occupying the user's visual field U.
  • the determining unit 114 determines to display virtual objects V1 to V5, for example, as shown in FIG. 6B.
  • the virtual objects V1 and V4 have less text and a smaller display area compared to FIG. 4B.
  • Such a display can be realized, for example, by including a plurality of text sentences corresponding to the display area in the virtual object data D corresponding to the virtual object V1 and the virtual object data D corresponding to the virtual object V4.
  • the determining unit 114 can change the display area of the virtual objects V1 and V4 by determining which of the plurality of text sentences to display.
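  • A minimal sketch of selecting which of a plurality of text sentences to display according to the available display area (the variant list and character budget are illustrative assumptions):
```python
def choose_text_variant(variants: list[str], max_chars: int) -> str:
    """variants: the plurality of text sentences included in the virtual object data D.
    Return the longest sentence that fits the allowed display area, approximated here
    by a character budget; fall back to the shortest sentence otherwise."""
    for text in sorted(variants, key=len, reverse=True):
        if len(text) <= max_chars:
            return text
    return min(variants, key=len)

# Example: a large display area allows the detailed guide, a small area only the short one.
variants = ["Cafe on the 1st floor of building R1, open 8:00-20:00", "Cafe, 1F"]
label = choose_text_variant(variants, max_chars=20)   # -> "Cafe, 1F"
```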
  • the character display of the virtual object V2 is also smaller than in FIG. 4B.
  • the determining unit 114 may change the proportion of the user's visual field U that the virtual object V occupies by changing the display size of the virtual object V.
  • the determining unit 114 may reduce the proportion of the visual field U3 occupied by the virtual objects V by reducing the number of virtual objects V, for example as in [1-1].
  • the determining unit 114 may reduce the proportion of the visual field U3 occupied by the virtual object V by reducing the font size of the text instead of reducing the amount of text.
  • the relationship between the proportion of the visual field range U occupied by the real objects R and the proportion of the user's visual field range U occupied by the virtual objects V may be fixed, or may be adjusted in consideration of at least one of weather conditions, the time of day, or whether the user is moving.
  • the determining unit 114 reduces the proportion of the virtual object V occupying the visual field range U of the user. Therefore, the visibility of the real object R is ensured. Furthermore, in FIG. 6B, the number of virtual objects V is not increased or decreased, but the size of the virtual objects V is changed, so that the proportion of the virtual objects V occupying the user's visual field U is reduced. Therefore, the fairness of the display of the virtual object V is guaranteed to a certain extent. This method is effective, for example, when the virtual object V is an advertisement.
  • FIG. 7B is a diagram showing the user's visual field range U4 corresponding to the captured image P4 of FIG. 7A.
  • the determining unit 114 determines that a larger number of virtual objects V than in FIG. 4B are to be displayed in the real space corresponding to FIG. 7A.
  • the determining unit 114 determines to display virtual objects V1 to V8, as shown in FIG. 7B, for example.
  • Compared to FIG. 4B, there are three more virtual objects V displayed in the visual field range U4 shown in FIG. 7B.
  • Virtual objects V1 to V5 are also displayed in FIG. 4B, while virtual objects V6 to V8 are displayed only in FIG. 7B.
  • the virtual object V6 is an image of a bird like the virtual object V5.
  • the virtual object V7 is an image of a large tree.
  • the virtual object V8 is an image containing text indicating a guide to the tree of the virtual object V7.
  • the determining unit 114 increases the amount of virtual objects V to be displayed to the user when the number of real objects R in the user's visual field U is small. Therefore, more virtual objects V can be displayed, and the range of expression using virtual objects V becomes wider.
  • FIG. 8B is a diagram showing the user's visual field U5 corresponding to the captured image P5 of FIG. 8A.
  • the proportion of the visual field range U occupied by the real objects R is smaller than in the captured image P1 shown in FIG. 4A. Therefore, in the real space corresponding to FIG. 8A, the determining unit 114 displays the virtual objects V in a larger area than in FIG. 4B, and increases the proportion of the visual field U occupied by the virtual objects V.
  • the determining unit 114 determines to display virtual objects V1 to V5, for example, as shown in FIG. 8B.
  • the virtual objects V1 and V4 have a larger amount of text and a larger display area compared to FIG. 4B.
  • Such a display can be realized, for example, by including a plurality of text sentences corresponding to the display area in the virtual object data D corresponding to the virtual object V1 and the virtual object data D corresponding to the virtual object V4.
  • the display of the character which is the virtual object V2 is also larger than in FIG. 4B.
  • the determining unit 114 may change the proportion of the visual field U that the virtual object V occupies by changing the display size of the virtual object V. Further, the determining unit 114 may increase the proportion of the visual field U occupied by the virtual objects V by increasing the number of virtual objects V to be displayed to the user, as in [2-1], for example. Further, regarding the virtual objects V1 and V4, the determining unit 114 may increase the proportion of the visual field U occupied by the virtual objects V by increasing the font size of the text instead of increasing the amount of text.
  • the determining unit 114 increases the proportion of the virtual object V occupying the user's visual field U. Therefore, it becomes possible to display the virtual object V in a larger size, and the range of expression using the virtual object V becomes wider.
  • the determining unit 114 may display, in a region of the visual field range U where the amount of at least one real object R is larger than in other regions, a smaller amount of at least one virtual object V than in those other regions.
  • An area with a large amount of real objects R is an area where there are many objects that the user should view. If the virtual object V is further displayed in such an area, the user may feel bothered. Therefore, the determining unit 114 may exclude from the display target the virtual object V whose display position overlaps with the area where the amount of the real objects R is large.
  • the determining unit 114 determines not to display the virtual objects V2 and V3 located below (on the road) in the user's visual field U2. Therefore, display of the virtual object V in an area where many real objects R are located is avoided, and the visibility of the real object R can be improved.
  • Similarly, the determining unit 114 may display, in a region of the visual field range U where the amount of at least one moving object is larger than in other regions, a smaller amount of at least one virtual object V than in those other regions.
  • a moving object is a real object R whose position changes, such as a person or a vehicle. Unlike fixed objects, moving objects change their positions, so the user needs to watch the movement. If the virtual object V is displayed overlapping the moving object, the user may find it bothersome. Therefore, the determining unit 114 may exclude the virtual object V whose display position overlaps with the moving object from the display target.
  • the determining unit 114 determines not to display the virtual objects V2 and V3 located below (on the road) in the user's visual field U2. Therefore, display of the virtual object V in an area where many moving objects are located is avoided, and visibility of the moving object can be improved.
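  • Both region-based rules above could be sketched as follows, excluding virtual objects V whose display position falls in a grid cell of the captured image P that is crowded with real objects R (or with moving objects); the grid size and threshold are assumptions:
```python
def filter_by_region_density(virtual_positions, object_boxes,
                             image_w: int, image_h: int,
                             grid: int = 4, max_per_cell: int = 3):
    """virtual_positions: {object_id: (x, y)} intended display positions in the captured
    image P. object_boxes: (x, y, w, h) boxes of real objects R (or only of moving
    objects). Returns the ids of virtual objects V that remain displayable."""
    counts = [[0] * grid for _ in range(grid)]
    for x, y, w, h in object_boxes:
        cx = min(int((x + w / 2) / image_w * grid), grid - 1)
        cy = min(int((y + h / 2) / image_h * grid), grid - 1)
        counts[cy][cx] += 1
    keep = []
    for obj_id, (x, y) in virtual_positions.items():
        cx = min(int(x / image_w * grid), grid - 1)
        cy = min(int(y / image_h * grid), grid - 1)
        if counts[cy][cx] <= max_per_cell:   # region not crowded with real/moving objects
            keep.append(obj_id)
    return keep
```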
  • the determining unit 114 may determine the virtual object V to be displayed in the viewing range U as follows.
  • A When the same virtual object V is displayed at multiple locations in the user's visual field U, the determining unit 114 reduces the amount of the virtual object V.
  • B The determining unit 114 decreases the amount of virtual objects V in order from the farthest virtual object V or the closest virtual object V from the user.
  • C The determining unit 114 decreases the amount of virtual objects V in order of display size, starting with the larger virtual object V or the smaller virtual object V.
  • D The determining unit 114 determines the virtual object V whose amount is to be reduced based on the user's attribute information or the area where the user is located.
  • E The determining unit 114 determines the virtual object V whose amount is to be reduced based on the type of the virtual object V (e.g., whether it is text or an image, or, if the virtual object V is an advertisement, the advertised product, etc.).
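  • As one hedged illustration, rule B above might look like the following sketch, dropping virtual objects V starting from the one farthest from the user until the determined amount remains (the position format and helper function are assumptions):
```python
def reduce_by_distance(virtual_objects, user_position, target_count: int):
    """virtual_objects: list of (object_id, (x, y, z)) positions in real space.
    Keep the target_count virtual objects V closest to the user, dropping the
    farthest ones first."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, user_position)) ** 0.5
    kept = sorted(virtual_objects, key=lambda item: dist(item[1]))[:max(target_count, 0)]
    return [object_id for object_id, _ in kept]
```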
  • In the above description, the case where the amount of the virtual objects V is determined based on the amount of the real objects R is illustrated; however, the transparency of the virtual objects V may be changed based on the amount of the real objects R, for example.
  • When the amount of the real objects R is large, the transparency of the virtual objects V is increased, making the real objects R relatively easier to see. When the amount of the real objects R is small, the transparency of the virtual objects V is lowered, making the virtual objects V relatively easier to see.
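  • A minimal sketch of this transparency variant, mapping the proportion of the visual field occupied by real objects R to an opacity for the virtual objects V (the bounds are illustrative assumptions):
```python
def virtual_object_alpha(real_ratio: float,
                         alpha_min: float = 0.3, alpha_max: float = 1.0) -> float:
    """Return the opacity of the virtual objects V: the larger the proportion real_ratio
    (0.0-1.0) of the visual field occupied by real objects R, the more transparent the
    virtual objects V become."""
    real_ratio = min(max(real_ratio, 0.0), 1.0)
    return alpha_max - (alpha_max - alpha_min) * real_ratio
```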
  • the display control unit 115 shown in FIG. 2 causes the projection device 101 to display the amount of virtual objects V determined by the determination unit 114. As described above, the projection device 101 displays the virtual object V corresponding to the virtual object data D on the left and right lenses. The user perceives the virtual object V as if it actually exists in the surrounding real space.
  • FIG. 9 is a flowchart showing the operation of processing device 108.
  • the processing device 108 functions as the first acquisition unit 111 and acquires virtual object data D from the server 20 (step S100). As described above, the processing device 108 transmits the terminal location information generated by the GPS device 104 to the server 20, and receives the virtual object data D selected and transmitted by the server 20 based on the terminal location information.
  • the processing device 108 functions as the second acquisition unit 112 and acquires the captured image P captured by the imaging device 102 (step S101).
  • the processing device 108 functions as the detection unit 113, and detects the real object R located in the user's visual field U based on the captured image P (step S102).
  • the processing device 108 functions as the determining unit 114, and determines the amount of virtual objects V to be displayed superimposed on the visual field U based on the amount of the real objects R present in the visual field U of the user (step S103).
  • the processing device 108 functions as the display control unit 115, and causes the projection device 101 to display the amount of virtual objects V determined by the determination unit 114 (step S104).
  • the processing device 108 returns to step S100 and repeats the subsequent processing.
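  • A minimal sketch of the loop of steps S100 to S104, with the functional blocks of the processing device 108 represented by assumed interfaces:
```python
def run_display_control(first_acq, second_acq, detector, determiner, display_ctrl):
    """Repeat steps S100 to S104 of FIG. 9; the arguments stand in for the functional
    blocks realized by the processing device 108."""
    while True:
        data = first_acq.acquire_virtual_object_data()        # S100: from the server 20
        captured = second_acq.acquire_captured_image()        # S101: captured image P
        real_amount = detector.detect(captured)               # S102: real objects R in U
        to_display = determiner.determine(data, real_amount)  # S103: amount of objects V
        display_ctrl.display(to_display)                      # S104: projection device 101
```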
  • As described above, the terminal device 10 determines the amount of virtual objects V to be superimposed and displayed on the visual field U based on the amount of the real objects R located in the visual field U of the user. Therefore, the amount of the virtual objects V displayed superimposed on the user's visual field U can be kept within an appropriate range according to the user's surrounding situation, and the usefulness of the display of the virtual objects V can be increased.
  • the terminal device 10 determines, as the amount, the number of virtual objects V to be superimposed and displayed on the visual field U, or the proportion of the visual field U that the virtual objects V occupy. Therefore, visibility of the real objects R from the user is ensured.
  • the terminal device 10 displays a smaller amount of virtual objects V in a region of the viewing range U where the amount of real objects R is larger than other regions compared to other regions. Therefore, displaying the virtual object V in an area where there are many objects that the user should view can be avoided, and user convenience can be improved.
  • the terminal device 10 displays a smaller amount of virtual objects V in an area of the viewing range U where the amount of moving objects is larger than in other areas, compared to other areas. Therefore, the visibility of the moving object whose movement the user needs to watch is ensured, and the user's convenience can be improved.
  • the terminal device 10 determines the amount of the virtual object V to be displayed superimposed on the user's visual field U based on at least one of weather conditions, the current time, or whether or not the user is moving. Therefore, the surrounding environment is reflected in the amount of the virtual object V superimposed and displayed in the user's visual field U, and the usefulness of the display of the virtual object V can be improved.
  • the position information generated by the GPS device 104 was used as the terminal position information indicating the position of the terminal device 10.
  • However, the position and posture of the terminal device 10 may instead be detected by sensors such as the above-mentioned geomagnetic sensor, acceleration sensor, angular acceleration sensor, or inertial measurement unit.
  • the server 20 specified the virtual object data D to be transmitted to the terminal device 10 using the terminal position information.
  • the present invention is not limited to this, and the virtual object data D to be transmitted to the terminal device 10 may be specified using the captured image P captured by the imaging device 102 of the terminal device 10.
  • the server 20 associates, for example, a model indicating the shape of the real object R arranged in the real space with the position information of the real object R in advance.
  • the server 20 specifies the position of the terminal device 10 by matching the shape of the real object R shown in the photographed image with the shape of the model, and specifies the virtual object data D to be transmitted to the terminal device 10.
  • a marker indicating the positional information of the place may be placed in advance in the real space, and the server 20 may obtain the positional information by detecting the marker from the captured image P.
  • the terminal device 10 was configured only with a glasses-type terminal device.
  • the terminal device 10 is not limited to this, and may include a glasses-type terminal device and a portable terminal device such as a smartphone, a tablet terminal, or a notebook computer.
  • the eyeglass-type terminal device and the portable terminal device are connected to each other.
  • When the terminal device 10 includes a portable terminal device, the portable terminal device may be responsible for some or all of the functions of the first acquisition section 111, the second acquisition section 112, the detection section 113, the determination section 114, and the display control section 115.
  • the terminal device 10 determines the virtual object V to be superimposed and displayed in the visual field U of the user.
  • the server 20 may determine the virtual object V to be displayed superimposed on the user's visual field U. That is, the server 20 may function as a display control device.
  • the processing device 206 of the server 20 realizes the functions of the second acquisition section 112, the detection section 113, and the determination section 114.
  • the processing device 206 of the server 20 realizes the function of a transmission control unit that transmits the amount of virtual objects V determined by the determination unit 114 to the terminal device 10. According to the third modification, the processing load on the terminal device 10 can be reduced.
  • each functional block may be realized using one physically or logically coupled device, or may be realized using two or more physically or logically separated devices connected directly or indirectly (e.g., by wire, wirelessly, etc.).
  • the functional block may be realized by combining software with the one device or the plurality of devices.
  • Functions include, but are not limited to, judging, determining, calculating, computing, processing, deriving, investigating, searching, confirming, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, considering, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assigning.
  • a functional block (configuration unit) that performs transmission is called a transmitting unit or a transmitter. In either case, as described above, the implementation method is not particularly limited.
  • information notification may be performed using physical layer signaling (e.g., DCI (Downlink Control Information), UCI (Uplink Control Information)), upper layer signaling (e.g., RRC (Radio Resource Control) signaling, MAC (Medium Access Control) signaling, broadcast information (MIB (Master Information Block), SIB (System Information Block))), other signals, or a combination thereof.
  • RRC signaling may be referred to as an RRC message, and may be, for example, an RRC Connection Setup message, an RRC Connection Reconfiguration message, or the like.
  • Each aspect/embodiment described in the present disclosure may be applied to at least one of LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G (4th generation mobile communication system), 5G (5th generation mobile communication system), 6G (6th generation mobile communication system), xG (xth generation mobile communication system), FRA (Future Radio Access), NR (New Radio), NX (New radio access), FX (Future generation radio access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), IEEE 802.20, UWB (Ultra-WideBand), Bluetooth (registered trademark), other appropriate systems, and next-generation systems extended based on these.
  • the specific operations performed by the base station may be performed by its upper node in some cases.
  • it is clear that various operations performed for communication with a terminal can be performed by the base station, by one or more network nodes other than the base station (for example, but not limited to, an MME (Mobility Management Entity) or an S-GW (Serving Gateway)), or by a combination of these.
  • Information etc. may be output from the upper layer (or lower layer) to the lower layer (or upper layer). It may be input/output via multiple network nodes.
  • Input/output information, etc. may be stored in a specific location (eg, memory) or may be managed using a management table. Information etc. to be input/output may be overwritten, updated, or additionally written. The output information etc. may be deleted. The input information etc. may be transmitted to other devices.
  • Judgment may be performed using a value expressed by 1 bit (0 or 1), a truth value (Boolean: true or false), or a comparison of numerical values (for example, comparison with a predetermined value).
  • Software should be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, and the like, regardless of whether it is called software, firmware, middleware, microcode, hardware description language, or any other name. Additionally, software, instructions, information, etc. may be sent and received via a transmission medium.
  • For example, when software is transmitted from a website, a server, or another remote source using wired technology (coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), etc.) and/or wireless technology (infrared, microwave, etc.), these wired and/or wireless technologies are included within the definition of a transmission medium.
  • the information, signals, etc. described in this disclosure may be represented using any of a variety of different technologies.
  • data, instructions, commands, information, signals, bits, symbols, chips, etc., which may be referred to throughout the above description, may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination of these. Note that terms explained in this disclosure and terms necessary for understanding this disclosure may be replaced with terms having the same or similar meanings.
  • at least one of the channel and the symbol may be a signal.
  • the signal may be a message.
  • a component carrier (CC) may be called a carrier frequency, a cell, a frequency carrier, or the like.
  • the information, parameters, etc. described in this disclosure may be expressed using absolute values, relative values from a predetermined value, or other corresponding information.
  • radio resources may be indicated by an index.
  • the names used for the parameters described above are not restrictive in any respect. Further, the formulas etc. using these parameters may differ from those explicitly disclosed in this disclosure. Since the various channels (e.g., PUCCH, PDCCH, etc.) and information elements may be identified by any suitable designation, the various names assigned to these various channels and information elements are in no way limiting.
  • Terms such as "base station (BS)," "wireless base station," "fixed station," "NodeB," "eNodeB (eNB)," "gNodeB (gNB)," "access point," "transmission point," "reception point," "transmission/reception point," "cell," "sector," "cell group," "carrier," and "component carrier" may be used interchangeably.
  • a base station is sometimes referred to by terms such as macrocell, small cell, femtocell, and picocell.
  • a base station can accommodate one or more (eg, three) cells.
  • the overall coverage area of a base station can be partitioned into multiple smaller areas, and each smaller area can also be provided with communication services by a base station subsystem (for example, a small indoor base station (RRH: Remote Radio Head)).
  • the term "cell” or “sector” refers to part or all of the coverage area of a base station and/or base station subsystem that provides communication services in this coverage.
  • the base station transmitting information to the terminal may be read as the base station instructing the terminal to control/operate based on the information.
  • A mobile station (MS) may be referred to by a person skilled in the art as a user terminal (user equipment, UE), terminal, subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable terminology.
  • At least one of the base station and the mobile station may be called a transmitting device, receiving device, communication device, etc.
  • the base station and the mobile station may be a device mounted on a mobile body, the mobile body itself, or the like.
  • the moving body refers to a movable object, and the moving speed is arbitrary. Naturally, this also includes cases where the moving object is stopped.
  • the mobile object is, for example, a vehicle, a transport vehicle, an automobile, a motorcycle, a bicycle, a connected car, an excavator, a bulldozer, a wheel loader, a dump truck, a forklift, a train, a bus, a cart, a rickshaw, a ship and other watercraft.
  • the mobile object may be a mobile object that autonomously travels based on a travel command. The mobile object may be a vehicle (e.g., a car, an airplane, etc.), an unmanned mobile object (e.g., a drone, a self-driving car, etc.), or a robot (manned or unmanned).
  • at least one of the base station and the mobile station includes devices that do not necessarily move during communication operations.
  • at least one of the base station and the mobile station may be an IoT (Internet of Things) device such as a sensor.
  • the base station in the present disclosure may be replaced by a user terminal.
  • For example, each aspect/embodiment of the present disclosure may be applied to a configuration in which communication between a base station and a user terminal is replaced with communication between a plurality of user terminals (which may be called, for example, D2D (Device-to-Device), V2X (Vehicle-to-Everything), etc.).
  • a configuration may be adopted in which the user terminal has the functions that the above-described base station has.
  • In addition, words such as "uplink" and "downlink" may be replaced with words corresponding to inter-terminal communication (for example, "side").
  • uplink channels, downlink channels, etc. may be replaced with side channels.
  • the user terminal in the present disclosure may be replaced by a base station.
  • the base station may have the functions that the user terminal described above has.
  • The terms "judging" and "determining" used in the present disclosure may encompass a wide variety of operations.
  • "Judging" and "determining" may include, for example, regarding judging, calculating, computing, processing, deriving, investigating, looking up (searching, inquiring) (e.g., searching in a table, a database, or another data structure), or ascertaining as having been "judged" or "determined."
  • "Judging" and "determining" may include regarding receiving (e.g., receiving information), transmitting (e.g., transmitting information), input, output, or accessing (e.g., accessing data in memory) as having been "judged" or "determined."
  • "Judging" and "determining" may include regarding resolving, selecting, choosing, establishing, comparing, and the like as having been "judged" or "determined."
  • In other words, "judging" and "determining" may include regarding some action as having been "judged" or "determined."
  • "Judging (determining)" may be read as "assuming," "expecting," "considering," and the like.
  • connection refers to any direct or indirect connection or combination between two or more elements.
  • the bonds or connections between elements may be physical, logical, or a combination thereof.
  • connection may be replaced with "access.”
  • As some non-limiting and non-exhaustive examples, two elements may be considered to be "connected" or "coupled" to each other by using one or more electrical wires, cables, and/or printed electrical connections, as well as by using electromagnetic energy having wavelengths in the radio frequency, microwave, and optical (both visible and invisible) regions.
  • the reference signal can also be abbreviated as RS (Reference Signal), and may be called a pilot depending on the applied standard.
  • The phrase "at least one of A and B" or "at least one of A or B" means "(A), (B), or (A and B)."
  • Furthermore, "at least one of A and B" may mean "one or more of A and B" or "at least one selected from the group of A and B."
  • Similarly, "at least one of A, B and C" or "at least one of A, B or C" means "(A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C)."
  • Furthermore, "at least one of A, B, and C" may mean "one or more of A, B, and C" or "at least one selected from the group of A, B, and C."

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

In the present invention, an acquisition unit acquires a captured image of a region including the visual field range of a user in real space. On the basis of the captured image, a detection unit detects the amount of at least one real object located in the visual field range. A determination unit determines the amount of at least one virtual object, from among the at least one virtual object associated with the visual field range, to be displayed superimposed on the visual field range, the determination being made on the basis of the amount of the at least one real object.
PCT/JP2023/013156 2022-05-24 2023-03-30 Display control device WO2023228565A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022084712 2022-05-24
JP2022-084712 2022-05-24

Publications (1)

Publication Number Publication Date
WO2023228565A1 true WO2023228565A1 (fr) 2023-11-30

Family

ID=88918989

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/013156 WO2023228565A1 (fr) 2023-03-30 2022-05-24 Display control device

Country Status (1)

Country Link
WO (1) WO2023228565A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006318095A (ja) * 2005-05-11 2006-11-24 Canon Inc Image processing method and image processing apparatus
US20170366951A1 (en) * 2016-06-15 2017-12-21 Samsung Electronics Co., Ltd. Method and apparatus for providing augmented reality services
JP2020507797A (ja) * 2016-12-29 2020-03-12 Magic Leap, Inc. Automatic control of wearable display device based on external conditions

Similar Documents

Publication Publication Date Title
US11770724B2 (en) Mobile terminal for displaying whether QoS is satisfied in wireless communication system
ES2964291T3 (es) Method and device for performing network registration in a wireless communication system
CN113439486B (zh) Method and terminal for displaying information for using MA PDU session
US20200027019A1 (en) Method and apparatus for learning a model to generate poi data using federated learning
US11368964B2 (en) Data transmission method, apparatus, and unmanned aerial vehicle
ES2944708T3 (es) Method and apparatus for reporting flight path information, and method and apparatus for determining information
EP3716688A1 (fr) Data transmission method and apparatus, and unmanned aerial vehicle
US10523639B2 (en) Privacy preserving wearable computing device
Monserrat et al. Key technologies for the advent of the 6G
KR20150020918A (ko) Display apparatus and control method thereof
KR102559686B1 (ko) Vehicle and vehicle image control method
US20230120144A1 (en) Communication related to network slice
US9332580B2 (en) Methods and apparatus for forming ad-hoc networks among headset computers sharing an identifier
CN115669023A (zh) Data sensing method, core network system, core network element, and chip
CN114731715A (zh) Method and apparatus for controlling configuration related to sidelink communication in a wireless communication system
WO2023228565A1 (fr) Display control device
US11317339B2 (en) Communication control method and communication control device
KR20220090167A (ko) Mobile device and vehicle
WO2023228856A1 (fr) Virtual object data selection device
WO2023243300A1 (fr) Rendering control device
CN104034335B (zh) Image display method and image acquisition device
JP2024039111A (ja) Display control device
CN117397258A (zh) Information processing method and apparatus, communication device, and storage medium
CN114092366A (zh) Image processing method, mobile terminal, and storage medium
WO2024085084A1 (fr) Avatar control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23811444

Country of ref document: EP

Kind code of ref document: A1