US20240112405A1 - Data processing method, electronic device, and storage medium - Google Patents

Data processing method, electronic device, and storage medium

Info

Publication number
US20240112405A1
US20240112405A1 (application US18/232,730)
Authority
US
United States
Prior art keywords
value
condition
influential
response
met
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/232,730
Other languages
English (en)
Inventor
Guannan Zhang
Chao Qi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Assigned to LENOVO (BEIJING) LIMITED reassignment LENOVO (BEIJING) LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: QI, Chao, ZHANG, GUANNAN
Publication of US20240112405A1 publication Critical patent/US20240112405A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/40Hidden part removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Definitions

  • the present disclosure generally relates to the field of information technologies and, more particularly, to a data processing method and a data processing device.
  • a data processing method includes: obtaining a first output image about a first space in a virtual space based on a first value, an output parameter being the first value and the first space including a target object; in response to a first condition being met, adjusting the output parameter from the first value to a second value; and after the output parameter is adjusted to be the second value, obtaining a second output image of a second space in the virtual space based on the second value, the second space including the target object.
  • an electronic device includes a processor, a memory, and a communication bus.
  • the communication bus is configured to realize a communication connection between the processor and the memory.
  • the memory is configured to store an information processing program.
  • the processor is configured to execute the information processing program stored in the memory, to: obtain a first output image about a first space in a virtual space based on a first value, an output parameter being the first value and the first space including a target object; in response to a first condition being met, adjust the output parameter from the first value to a second value; and after the output parameter is adjusted to be the second value, obtain a second output image of a second space in the virtual space based on the second value, the second space including the target object.
  • a non-transitory computer readable storage medium configured to store an information processing program.
  • the information processing program is configured to be executed by a device where the non-transitory computer readable storage medium is located, to control the device to: obtain a first output image about a first space in a virtual space based on a first value, an output parameter being the first value and the first space including a target object; in response to a first condition being met, adjust the output parameter from the first value to a second value; and after the output parameter is adjusted to be the second value, obtain a second output image of a second space in the virtual space based on the second value, the second space including the target object.
  • FIG. 1 is a schematic flowchart of an exemplary data processing method consistent with various embodiments of the present disclosure.
  • FIG. 2 is a schematic diagram of a first position and a second position consistent with various embodiments of the present disclosure.
  • FIG. 3 is a schematic diagram of depth detection in a virtual space consistent with various embodiments of the present disclosure.
  • FIG. 4 is a schematic diagram of effective points in a virtual space consistent with various embodiments of the present disclosure.
  • FIG. 5 is a schematic diagram of adjustment of a position of a reference point consistent with various embodiments of the present disclosure.
  • FIG. 6 is a schematic diagram of an exemplary hollowed-out model consistent with various embodiments of the present disclosure.
  • FIG. 7 is a schematic diagram of depth detection on a hollowed-out model consistent with various embodiments of the present disclosure.
  • FIG. 8 is a schematic scenery diagram of an exemplary virtual space consistent with various embodiments of the present disclosure.
  • FIG. 9 is a schematic diagram without image jitter prevention and without Alpha channel rendering consistent with various embodiments of the present disclosure.
  • FIG. 10 is a schematic diagram with image jitter prevention and with Alpha channel rendering consistent with various embodiments of the present disclosure.
  • FIG. 11 is a schematic structural diagram of an exemplary data processing device consistent with various embodiments of the present disclosure.
  • FIG. 12 is a schematic structural diagram of an exemplary wearable device consistent with various embodiments of the present disclosure.
  • FIG. 13 is a schematic structural diagram of an exemplary electronic device consistent with various embodiments of the present disclosure.
  • various embodiments of the present disclosure provide methods, devices, electronic devices, and computer readable storage media for data processing.
  • the present disclosure provides a data processing method. As shown in FIG. 1 , in one embodiment, the method may include S 101 and S 102 .
  • a first output image about a first space in a virtual space may be obtained based on a first value, where an output parameter may be the first value and the first space includes a target object.
  • the virtual space may be a virtual three-dimensional environment displayed (or provided) when a corresponding application program runs on an electronic device, and the application program may be a browser application, a client application, and the like.
  • the virtual space may be a simulation space of the real world, or the virtual space may be a semi-simulation and semi-fictional three-dimensional space, or the virtual space may also be a purely fictional three-dimensional space.
  • the virtual space may include but is not limited to a high-dimensional virtual space such as a three-dimensional virtual space or a four-dimensional virtual space.
  • the present embodiment where the virtual space is a three-dimensional virtual space is used as an example only to illustrate the present disclosure, but does not limit the scope of the present disclosure.
  • the electronic device may be a device with a data processing function. That is, the electronic device may include a processing unit, and may be a mobile phone, a computer, etc. In addition to the processing unit, the electronic device may further include a display unit, and the display unit may display the target content based on instructions from the processing unit.
  • the application program may be an application program capable of supporting the display of a virtual space, and the virtual space may include virtual objects.
  • the application program may be an application program capable of supporting the three-dimensional virtual space.
  • the application program may be any one of a virtual reality (VR) application program or an augmented reality (AR) application program.
  • the application program may also be a three-dimensional (3D) game program.
  • a virtual object may be a three-dimensional solid model created based on animation skeleton technology.
  • a virtual object may have its own shape and volume in the three-dimensional virtual space, and may occupy a part of the three-dimensional virtual space.
  • the output parameter may include but is not limited to a reference point parameter, such as a reference point position.
  • the reference point may be a camera used to collect images about the virtual space for generating a final output image by a rendering engine. Therefore, the images captured by the reference point may determine the content of the image output.
  • the output parameter may also include other parameters able to affect the output image.
  • the parameters may include the field of view angle of the reference point (that is, FOV), a depth of field of the reference point, etc.
  • the reference point in the following embodiments of the present disclosure refers to the camera
  • the reference point parameter refers to the position of the reference point, which is the position of the camera.
  • the first space may be a space where the reference point is used for image acquisition, and objects located in the first space may be acquired and rendered on the screen.
  • the first space may be determined in the following manner. For example, in the viewing frustum of the reference point, a space between a near clipping plane and a far clipping plane may be determined as the first space. In some other embodiments, the first space may also be determined in other suitable ways, which are not limited here.
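  • As an illustration only (not part of the patent's text), the following minimal Python sketch shows one simplified way to test whether a point of the virtual space falls inside such an acquisition space; the cone approximation of the viewing frustum, the function name in_acquisition_space, and the parameter values in the example are assumptions introduced here:

    import math

    def in_acquisition_space(point, cam_pos, cam_forward, near, far, fov_deg):
        """Simplified membership test: depth between the near and far clipping
        planes, and angle from the forward axis within the field of view.
        cam_forward is assumed to be a unit vector; real engines test six planes."""
        v = tuple(p - c for p, c in zip(point, cam_pos))
        depth = sum(vi * fi for vi, fi in zip(v, cam_forward))  # projection on the view axis
        if not (near <= depth <= far):
            return False
        dist = math.sqrt(sum(vi * vi for vi in v))
        if dist == 0:
            return True
        angle = math.degrees(math.acos(max(-1.0, min(1.0, depth / dist))))
        return angle <= fov_deg / 2

    # Example: a target 5 units ahead of a reference point at the origin looking down +Z.
    print(in_acquisition_space((0, 0, 5), (0, 0, 0), (0, 0, 1), 0.1, 100.0, 60.0))  # True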
  • the output parameter may be the reference point position.
  • when the reference point is at different positions, the corresponding acquisition spaces may be different.
  • when the reference point is at a first position, the reference point may correspond to the first space.
  • when the reference point is at a second position, the reference point may correspond to a second space.
  • the second space may be similar to the first space, and both the first space and the second space may be acquisition spaces of the reference point.
  • the target object may be the observed object, and the target object may be located in the spatial region where the image is captured by the reference point. Therefore, the target object may be presented in the output image by being captured by the reference point.
  • in response to the first condition being met, the output parameter may be adjusted from the first value to a second value.
  • a second output image about the second space in the virtual space may be obtained based on the second value.
  • the target object may be included in the second space.
  • a strategy for automatic adjustment of the output parameter is provided.
  • the output parameter may be adjusted from the first value to the second value.
  • the second output image that meets viewing needs may be obtained, without the need for the user to manually adjust the output parameter.
  • the defect that some applications do not provide an output parameter adjustment interface may also be overcome.
  • the output parameter may be the position of the reference point, and the reference point may be located in the virtual space.
  • the first value may refer to the first position
  • the second value may refer to the second position.
  • the first output image may be an image about the first space in the virtual space obtained based on the reference point at the first position
  • the second output image may be an image about the second space in the virtual space obtained based on the reference point at the second position.
  • the strategy for automatic adjustment of the output parameter may include S 1021 to S 1023 .
  • an influential object of the target object may be determined according to the target object and the reference point.
  • the first condition may correspond to the fact that the target object is occluded in the first output image
  • the influential object corresponds to an object that causes the target object to be occluded.
  • a second position may be determined based on the influential object.
  • the second position may be the position where the influential object is located.
  • the output parameter may be adjusted from the first position to the second position.
  • the position of the reference point may be adjusted from the original position to the position of the influential object.
  • with the adjustment strategy provided by the above embodiment, the second position is determined based on the influential object, such that after the output parameter is adjusted to the second position, the viewing angle at the second position may not change greatly compared with the first position, ensuring the continuity of the viewing angle.
  • the influential object in S 1021 may be determined by S 10211 to S 10213 according to principles of computer graphics imaging.
  • a first target position of the target object in the virtual space and a second target position of the reference point in the virtual space may be obtained.
  • the position coordinates (x2, y2, z2) of the target object in the virtual space, that is, the first target position may be obtained.
  • the position coordinates (x1, y1, z1) of the reference point in the virtual space, that is, the second target position may be obtained.
  • a reference line between the target object and the reference point may be obtained.
  • the reference line with a direction may be determined with one of the coordinates as the starting point and the other coordinate as the end point.
  • the reference line R = (x1-x2, y1-y2, z1-z2) may be obtained.
  • an object that the reference line passes through may be determined as the influential object of the target object.
  • an object that the reference line R passes through may be determined as the influential object of the target object.
  • the reference line R may imitate the line of sight from the reference point to the target object.
  • when the reference line passes through a non-target object, the non-target object inevitably blocks the target object. Therefore, the non-target object that the reference line passes through may be determined as the influential object of the target object.
  • the non-target object that the reference line R passes through may be determined in the following manner.
  • the coordinate set of points included in the reference line R may be called the first coordinate set, and the coordinate set of points included in each non-target object may be called the second coordinate set.
  • when the first coordinate set and a second coordinate set have coinciding points, the corresponding non-target object may be taken as the influential object.
  • for example, the points where the first coordinate set coincides with the second coordinate sets may be {point2, point3, point4, point5, point6, point7, point8}.
  • the first object may be determined as the influential object based on point2 and point3, the second object may be determined as the influential object based on point4 and point5, or the third object may be determined as the influential object based on point6, point7, and point8.
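  • As an illustration only (one possible reading of S 10211 to S 10213, not the patent's exact algorithm), the following minimal Python sketch samples points along the reference line and reports every non-target object whose axis-aligned bounding box contains a sample as an influential object; the Box class, the function name find_influential_objects, and the sample count are assumptions introduced here:

    from dataclasses import dataclass

    @dataclass
    class Box:
        name: str
        min_pt: tuple  # (x, y, z)
        max_pt: tuple

    def find_influential_objects(ref_point, target_pos, objects, samples=100):
        influential = []
        for obj in objects:
            for i in range(1, samples):
                t = i / samples
                # Point on the reference line R between the reference point and the target.
                p = tuple(r + t * (g - r) for r, g in zip(ref_point, target_pos))
                if all(lo <= c <= hi for c, lo, hi in zip(p, obj.min_pt, obj.max_pt)):
                    influential.append(obj)
                    break
        return influential

    # Example: a wall between a reference point at the origin and a target at (0, 0, 10).
    wall = Box("wall", (-5, -5, 4), (5, 5, 5))
    print([o.name for o in find_influential_objects((0, 0, 0), (0, 0, 10), [wall])])  # ['wall']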
  • S 1022 may be performed to determine the second position based on the influential object.
  • the target influential object may be determined from the plurality of influential objects, which may be achieved in different ways. Subsequently, after determining the target influential object, there may be multiple ways to determine the second position according to the target influential object. In the first way, the second position may be located on the target influential object. In the second way, the second position may not be located on the target influential object. Further, each of the first way and the second way may be implemented in different manners, as shown in FIG. 2 .
  • the first position may be the position of the point A.
  • in the first way, where the second position is located on the target influential object, the method may include the following manners.
  • in the first manner, the second position may be located on the surface of the target influential object that is closest to the target object, such as point B.
  • in the second manner, the second position may be located at the intersection of the reference line and the surface of the target influential object, such as point C.
  • in the second way, where the second position is not located on the target influential object, the method may include the following manners.
  • in the first manner, the second position may be located on the reference line, such as point D.
  • in the second manner, the second position may not be located on the reference line, but may be located above the horizontal level of the target influential object, such as point E. Since the reference lines leading to the target object from the four positions B, C, D, and E do not collide with any objects other than the target object, setting the second position at any of B, C, D, or E keeps the target object unoccluded.
  • S 1022 may include:
  • the influential object 1 which is closest to the target object may be selected as the target influential object.
  • the second position may be determined according to the target influential object (that is, the influential object 1 ).
  • for the implementation of S 10222, reference may be made to the above description of the points B, C, D, and E in FIG. 2.
  • S 10221 and S 10222 may ensure that there may be no other objects between the reference point and the target object, thereby ensuring that the target object is able to be displayed without occlusion.
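  • As an illustration only (an illustrative placement rule, not the patent's exact one), the following minimal Python sketch picks the influential object closest to the target and puts the second position on the reference line just in front of that object's surface on the target side, roughly the point C of FIG. 2; the Box class, the function name second_position, and the sampling resolution are assumptions introduced here:

    import math
    from dataclasses import dataclass

    @dataclass
    class Box:
        name: str
        min_pt: tuple
        max_pt: tuple
        def center(self):
            return tuple((lo + hi) / 2 for lo, hi in zip(self.min_pt, self.max_pt))
        def contains(self, p):
            return all(lo <= c <= hi for c, lo, hi in zip(p, self.min_pt, self.max_pt))

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def second_position(ref_point, target_pos, influential, samples=200):
        # Target influential object: the influential object closest to the target.
        nearest = min(influential, key=lambda o: dist(o.center(), target_pos))
        prev = target_pos
        for i in range(1, samples + 1):
            t = i / samples
            # Walk from the target toward the original reference point (first position).
            p = tuple(g + t * (r - g) for g, r in zip(target_pos, ref_point))
            if nearest.contains(p):
                return prev  # last sample still on the target side of the surface
            prev = p
        return ref_point  # nothing hit: keep the first position

    wall = Box("wall", (-5, -5, 4), (5, 5, 5))
    print(second_position((0, 0, 0), (0, 0, 10), [wall]))  # a point just in front of the wall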
  • another method may be used to solve the problem that the target object is occluded.
  • the method may include adjusting a display parameter of the influential object.
  • the adjusted display parameter may be the transparency of the influential object, and the transparency of the influential object may be adjusted through rendering to make the adjusted transparency higher than the transparency before adjustment. Therefore, the effect that the target object is not occluded may be achieved.
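  • As an illustration only, a minimal Python sketch of this alternative: rather than moving the reference point, the transparency of each influential object is raised; the SceneObject class, the 0-1 transparency scale, and the 0.8 value are assumptions introduced here:

    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        transparency: float = 0.0  # 0 = fully opaque, 1 = fully transparent (assumed scale)

    def reveal_target(influential_objects, occluded_transparency=0.8):
        for obj in influential_objects:
            # Only ever increase transparency, matching the described adjustment direction.
            obj.transparency = max(obj.transparency, occluded_transparency)

    wall = SceneObject("wall")
    reveal_target([wall])
    print(wall.transparency)  # 0.8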
  • when the first condition is satisfied, the output parameter may be adjusted from the first value to the second value.
  • the first condition includes that the target object is detected to be occluded
  • adjustment of the output parameters may be performed every time the target object is detected to be occluded.
  • the output parameter may be adjusted frequently, and the output image may also change frequently (called image jitter), which is not good for viewing by users.
  • a second condition may be introduced as an additional trigger condition for adjusting the output parameter.
  • when the first condition and the second condition are satisfied, the output parameter may be adjusted from the first value to the second value. That is to say, the output parameter may not be adjusted to the second value merely because the first condition is satisfied.
  • the operation of adjusting the output parameter may be performed only when the second condition is also satisfied.
  • the present disclosure provides some optional implementations to prevent image jitter.
  • the implementation may include: in response to the influential object of the target object being detected and the first condition being met, using the moment when the influential object is detected as a first moment; and, within a first time-interval from the first moment, if the influential object is not detected again, determining that the second condition is met and adjusting the output parameter from the first value to the second value at the second moment after the first time-interval elapses from the first moment.
  • during the first time-interval, if the influential object is detected again at a third moment, the third moment may be taken as the first moment.
  • the image jitter may be prevented by a delayed response.
  • the delayed response may be interpreted as executing the adjustment n seconds after the event is triggered.
  • that is, when the influential object is detected again within the first time-interval, the timing may be restarted.
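  • As an illustration only, the following minimal Python sketch shows the delayed response described above: the first detection starts a timer, any repeated detection within the first time-interval restarts it (the third moment becomes the new first moment), and the adjustment runs only once the interval elapses with no further detection; the class name and the 1.0-second interval are assumptions introduced here:

    class DelayedAdjuster:
        def __init__(self, interval_s):
            self.interval_s = interval_s
            self.first_moment = None  # time of the most recent detection

        def on_influential_detected(self, now_s):
            # First condition met; (re)start the timing from this moment.
            self.first_moment = now_s

        def maybe_adjust(self, now_s, adjust):
            # Call every frame; adjust once the interval has elapsed quietly.
            if self.first_moment is not None and now_s - self.first_moment >= self.interval_s:
                adjust()                    # second condition met: apply the second value
                self.first_moment = None

    adjuster = DelayedAdjuster(interval_s=1.0)
    adjuster.on_influential_detected(now_s=0.0)
    adjuster.on_influential_detected(now_s=0.4)  # detected again: timing restarts
    adjuster.maybe_adjust(now_s=1.2, adjust=lambda: print("adjust"))  # 0.8 s elapsed: no adjust
    adjuster.maybe_adjust(now_s=1.5, adjust=lambda: print("adjust"))  # 1.1 s elapsed: adjusts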
  • another implementation to prevent image jitter may include: determining that the first condition is satisfied when the influential object of the target object is detected; when the influential object has a target label, determining that the second condition is satisfied and adjusting the output parameter from the first value to the second value; and, when the influential object does not have a target label, determining that the second condition is not satisfied to not adjust the output parameter from the first value to the second value.
  • prevention of image jitter may be realized by tag response.
  • the tag response may be understood as adjusting the output parameter when the influential object has the target tag and not adjusting the output parameter when the influential object does not have the target tag.
  • the label may be pre-labeled, and the output parameter may be selectively adjusted by pre-labeling the target labels for potentially influential objects. Which potential influential object is marked with the target label may be determined according to many rules.
  • One optional rule may include that: when the ratio of the hollowed-out area of a potential influential object to its total cross-sectional area is small, the potential influential object may be marked with the target label. A relatively small hollowed-out area (that is, a large non-hollowed-out area) means that the target object may be blocked by the hollowed-out structure for a long time. Therefore, in response to the target object being blocked by this type of influential object, the position of the reference point may be adjusted to the second position, and when the target object is no longer blocked by the hollowed-out structure, the position of the reference point may be adjusted back to the default position, which has little effect on the viewing experience.
  • when the ratio of the hollowed-out area of a potential influential object to its total cross-sectional area is large, that is, when most of the object is hollowed out, there may be no need to label this type of hollowed-out object.
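  • As an illustration only, the following minimal Python sketch shows the tag response: potential influential objects whose hollowed-out ratio is small are pre-marked with the target label, and an adjustment is made only when a detected influential object carries that label; the dictionary representation, the label string, and the 0.5 ratio threshold are assumptions introduced here:

    TARGET_LABEL = "adjust-on-occlusion"

    def label_potential_influencers(objects, ratio_threshold=0.5):
        for obj in objects:
            # A small hollowed-out ratio means the target could stay blocked for
            # a long time, so this object should trigger a reference-point adjustment.
            if obj["hollow_ratio"] < ratio_threshold:
                obj.setdefault("tags", set()).add(TARGET_LABEL)

    def should_adjust(influential_object):
        # First condition met (an influential object was detected); the second
        # condition is met only when the object carries the target label.
        return TARGET_LABEL in influential_object.get("tags", set())

    fence = {"name": "fence", "hollow_ratio": 0.9}  # mostly hollowed out: not labeled
    wall = {"name": "wall", "hollow_ratio": 0.1}    # mostly solid: labeled
    label_potential_influencers([fence, wall])
    print(should_adjust(fence), should_adjust(wall))  # False True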
  • another implementation to prevent image jitter may include: determining that the first condition is satisfied when the influential object of the target object is detected; and when the number of times that the influential object is detected satisfies the continuous preset number of times, determining that the second condition is met, and adjusting the output parameter from the first value to the second value.
  • FIG. 6 shows the multiple reference lines corresponding to the reference point at different moments while the viewing angle follows the target object, and the endpoints of these eight reference lines (indicated by the arrows) show the positions where the reference lines first hit the influential object.
  • Reference line 2 , Reference line 4 , Reference line 6 , and Reference line 8 all hit the hollowed-out model, that is, the influential object is detected, and the first condition is determined to be met.
  • for example, when the continuous preset number of times is 4, as in FIG. 7, if occlusion detection performed on each of 4 consecutive frames (corresponding to reference line 2, reference line 4, reference line 6, and reference line 8) detects the influential object each time, the number of times the influential object is detected reaches 4 consecutive times, the second condition is determined to be met, and an output parameter adjustment is performed.
  • a time limit may be added to the second condition.
  • the second condition may include that: the number of times that the influential object is detected within a certain preset period of time meets the continuous preset number of times.
  • when the preset number of consecutive detections occurs within the preset period of time, the output parameter may be adjusted once.
  • otherwise, the second condition may be determined to be unsatisfied, and the output parameter may not be adjusted.
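  • As an illustration only, the following minimal Python sketch shows the consecutive-detection rule with the optional time limit: the output parameter is adjusted only after the influential object is detected in N consecutive occlusion checks that all fall within a preset window; N = 4 matches the example above, while the 0.5-second window, the class name, and the frame times are assumptions introduced here:

    class ConsecutiveDetectionGate:
        def __init__(self, required=4, window_s=0.5):
            self.required = required
            self.window_s = window_s
            self.times = []  # timestamps of the most recent consecutive detections

        def on_check(self, now_s, influential_detected):
            if not influential_detected:
                self.times = []          # the run of consecutive detections is broken
                return False
            self.times.append(now_s)
            self.times = self.times[-self.required:]
            if len(self.times) == self.required and now_s - self.times[0] <= self.window_s:
                self.times = []          # adjust once, then start counting again
                return True              # second condition met: adjust the output parameter
            return False

    gate = ConsecutiveDetectionGate()
    for frame, t in enumerate([0.00, 0.03, 0.06, 0.09]):
        if gate.on_check(t, influential_detected=True):
            print(f"adjust at frame {frame}")  # fires on the 4th consecutive detection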
  • the second condition may also have other settings.
  • another implementation to prevent image jitter may include that: when the number of times that the influential object is detected meets the continuous preset number of times and the target distances between the reference point and the position of the influential object for those detections satisfy a similarity condition, the second condition may be determined to be met and the output parameter may be adjusted from the first value to the second value.
  • the preset number of times may be 4.
  • the determination of whether the target distances between the reference point and the position of the influential object for the continuous preset number of detections satisfy the similarity condition may include the following.
  • the four frames of images may correspond to reference line 2 , reference line 4 , reference line 6 , and reference line 8 .
  • the position of the reference point corresponding to each of the 4 reference lines and the position of the influential object may be obtained.
  • the distance between the position of the reference point corresponding to each reference line and the position of the influential object may be computed, to obtain the target distance of each of the four reference lines, recorded as Distance2, Distance4, Distance6, and Distance8.
  • the target distance satisfying the similarity condition may include that the four target distances are relatively similar, that is, the difference between the four target distances does not exceed the distance threshold. For example, the difference between any two of the four target distances does not exceed the distance threshold, or the difference between adjacent two does not exceed the distance threshold.
  • the second condition may be determined to be met.
  • when the difference between the target distances is too large, the position gap before and after the adjustment of the reference point may be too large when the position of the reference point needs to be adjusted, and there may be too many changes between the adjusted image and the pre-adjusted image. Further, the time-interval between two adjacent frames may be very small. Therefore, to avoid frequent large changes in the output image within a short period of time, the condition that the target distances satisfy the similarity condition is added to the second condition in the present embodiment.
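  • As an illustration only, the following minimal Python sketch shows the distance-similarity check: a target distance is collected for each detection, and the similarity condition holds only if no two of the collected distances differ by more than a threshold; the 0.2 threshold and the example positions are assumptions introduced here:

    import math

    def distances_similar(distances, threshold=0.2):
        # "Relatively similar": no two target distances differ by more than the
        # threshold (max - min covers every pair at once).
        return max(distances) - min(distances) <= threshold

    # One target distance per detection: reference point to the influential object.
    ref_points = [(0, 0, 0.00), (0, 0, 0.05), (0, 0, 0.02), (0, 0, 0.08)]
    influential_pos = (0, 0, 3.1)
    distances = [math.dist(p, influential_pos) for p in ref_points]
    print(distances_similar(distances))           # True: second condition may be met
    print(distances_similar(distances + [4.5]))   # False: do not adjust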
  • reducing the frequency of occlusion detection may be also used to prevent image jitter.
  • different times may correspond to different reference lines.
  • the intersection points of the reference line and the model in the virtual space at different times may be obtained, and one intersection point closest to the reference point (that is, the first intersection point encountered by the reference line) may be selected as the target intersection point (the target intersection point is on the influential object closest to the reference point), and the distance between the target intersection point and the reference point may be the target distance.
  • whether the difference of the target distances between two adjacent frames (such as the current frame and the previous frame) is too large may be determined.
  • when the difference is too large, the counter may be cleared; and when the difference is not too large, the counter may be incremented by 1.
  • the difference of the target distances may be compared for every frame, and the counter may be incremented or cleared according to the comparison result. Before the counter is accumulated to 4, as long as the difference is too large once, the counter may be cleared. When the counter is accumulated to 4, the occlusion detection may be performed.
  • the occlusion detection here may include detecting whether there is an influential object between the reference point and the target object.
  • the occlusion detection may be performed only once when the target distances are not much different in the accumulative 4 frames.
  • the frequency of occlusion detection may be reduced, to further reduce the number of output parameter adjustments, thereby preventing image jitter.
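  • As an illustration only, the following minimal Python sketch shows this reduced-frequency scheme: the per-frame target distance (reference point to the first intersection of the reference line) is compared with the previous frame, a large jump clears the counter, a small one increments it, and the more expensive occlusion detection runs only after four stable frames in a row; the class name, the 0.5 jump threshold, and the example distances are assumptions introduced here:

    class OcclusionDetectionThrottle:
        def __init__(self, max_jump=0.5, stable_frames=4):
            self.max_jump = max_jump
            self.stable_frames = stable_frames
            self.prev_distance = None
            self.counter = 0

        def on_frame(self, target_distance):
            """Return True when occlusion detection should run for this frame."""
            if self.prev_distance is not None and abs(target_distance - self.prev_distance) > self.max_jump:
                self.counter = 0                   # too large a jump: clear the counter
            else:
                self.counter += 1
            self.prev_distance = target_distance
            if self.counter >= self.stable_frames:
                self.counter = 0
                return True                        # run occlusion detection once
            return False

    throttle = OcclusionDetectionThrottle()
    for frame, d in enumerate([3.0, 3.1, 3.05, 3.08, 3.06]):
        if throttle.on_frame(d):
            print(f"run occlusion detection at frame {frame}")  # frame 3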
  • another implementation to prevent image jitter may include: when the target object is heavily occluded, determining that the second condition is satisfied and adjusting the output parameter; and when the target object is only slightly occluded, determining that the second condition is not satisfied and not adjusting the output parameter. Therefore, the image jitter may be prevented, and the amount of calculation may be reduced to improve the utilization of computing resources.
  • the complex hollowed-out model may be rendered with Alpha channel, to visually render the hollowed-out part with a hollowed-out effect.
  • the actual material of the hollowed-out part is not hollowed out, however, and may still be detected as a blocking object when the reference line hits the hollowed-out part during reference-line intersection detection.
  • the output parameter may be directly adjusted to the second position, and may not return to the first position at the hollowed-out place.
  • FIG. 8 is a schematic diagram of a virtual space 800
  • FIG. 9 shows images of the virtual space 800 without image jitter prevention processing and without Alpha channel rendering.
  • FIG. 10 shows images of the virtual space 800 with image jitter prevention processing and with Alpha channel rendering.
  • the target object 801 within the viewing angle is blocked multiple times, and dead spots 901 occur in the displayed content.
  • occlusions can be detected more accurately, and the position of the reference point can be dynamically adjusted, ensuring that no occlusions occur within the viewing angle. Further, complexity of the model and the degree of hollowing out can be ignored, and a perspective experience effect without dead angles can be achieved.
  • the present disclosure also provides a data processing device.
  • the data processing device may be used to achieve any data processing method provided by various embodiments of the present disclosure.
  • the data processing device 200 may include: an acquisition module 201 , configured to acquire an output image in a virtual space according to an output parameter; and an adjustment module 202 , configured to adjust the output parameter from a first value to a second value when a first condition is satisfied.
  • the acquisition module 201 may obtain a first output image of the first space in the virtual space.
  • the first space includes a target object.
  • the acquisition module 201 may obtain a second output image of the second space in the virtual space, where the second space includes the target object.
  • the output parameter may include the position of the reference point in the virtual space.
  • the first value may include a first position
  • the second value may include a second position.
  • the acquisition module 201 may obtain the first output image of the first space in the virtual space by obtaining the first output image about the first space in the virtual space based on the reference point at the first position.
  • the acquisition module 201 may obtain the second output image of the second space in the virtual space by obtaining the second output image about the second space in the virtual space based on the reference point located at the second position.
  • the output parameter may be adjusted from the first value to the second value by: determining the influential object of the target object according to the target object and the reference point when the first condition is satisfied; determining the second position based on the influential object; and adjusting the output parameter from the first position to the second position.
  • determining the influential object of the target object according to the target object and the reference point may include: obtaining the first target position of the target object in the virtual space and the second target position of the reference point in the virtual space; determining the reference line between the target object and the reference point based on the first target position and the second target position; and determining an object that the reference line passes through as the influential object of the target object.
  • determining the second position based on the influential object may include: when there are multiple influential objects, selecting an object closest to the target object as the target influential object; and determining the second position based on the position of the target influential object.
  • adjusting the output parameter from the first value to the second value when the first condition is met may include: when the first condition and the second condition are met, adjusting the output parameter from the first value to the second value.
  • adjusting the output parameter from the first value to the second value may include: when the influential object of the target object is detected, determining that the first condition is met and the moment (e.g., time point) when the influential object is detected as the first moment; within the first time-interval from the first moment, if the influential object is not detected again, determining that the second condition is met, and adjusting the output parameter from the first value to the second value at the second moment after the first time-interval elapses from the first moment; and during the first time-interval, if the influential object is detected at the third moment, determining the third moment as the first moment.
  • adjusting the output parameter from the first value to the second value may include: when the influential object of the target object is detected, determining that the first condition is met; and when the influential object carries a target label, determining that the second condition is met, and adjusting the output parameter from the first value to the second value.
  • adjusting the output parameter from the first value to the second value may include: when the influential object of the target object is detected, determining that the first condition is met; and when the number of times that the influential object is detected meets a consecutive preset number, determining that the second condition is met, and adjusting the output parameter from the first value to the second value.
  • determining that the second condition is met, and adjusting the output parameter from the first value to the second value may include: when the number of times that the influential object is detected meets the consecutive preset number and the influential object meets a distance similarity condition, determining that the second condition is met, and adjusting the output parameter from the first value to the second value.
  • when the first condition is met, the output parameter may be able to be adjusted from the first value to the second value.
  • it should be noted that the output parameter is not necessarily adjusted from the first value to the second value whenever the first condition is met, but may be adjusted only when the second condition is also met. Therefore, the frequent change of the output image induced by frequently adjusting the output parameter every time the first condition is met may be avoided, preventing a negative influence on the visual effect.
  • the wearable device 300 may include a wearable body 301 and a wearable assembly (not shown in the figure).
  • the wearable body 301 may include a communication unit 302 , a processing unit 303 , and a display unit 304 .
  • the communication unit 302 may be connected to a server, at least for receiving a display image of an application program sent by the server and feeding back an instruction operation to the server such that the server is able to obtain the interactive operation that has a mapping relationship with the instruction operation according to the received instruction operation.
  • the processing unit 303 may be configured to control a virtual object to move in the virtual space in response to the movement adjustment operation acting on the virtual object.
  • the processing unit 303 may also be configured to obtain a first output image about a first space in the virtual space based on the first value when the output parameter is the first value.
  • the first space may include the target object.
  • the output parameter may be adjusted from the first value to the second value.
  • the second output image about the second space in the virtual space may be obtained based on the second value, where the second space may include the target object.
  • the wearable device 300 may establish a connection with the server through the communication unit 302 .
  • the processing unit 303 may be disposed on the wearable body 301 .
  • the processing unit 303 may include a processor, configured to execute any data processing method provided by various embodiments of the present disclosure.
  • the processing unit 303 may include, but is not limited to, a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or any combination thereof.
  • the display unit 304 may be configured to display a display image.
  • the wearable body 301 may include, but is not limited to, a shell of the wearable device and peripheral hardware circuits necessary for supporting the normal operation of the communication unit 302 and the processing unit 303.
  • when the first condition is met, the output parameter may be able to be adjusted from the first value to the second value. It should be noted that the output parameter is not necessarily adjusted from the first value to the second value whenever the first condition is met, but may be adjusted only when the second condition is also met. Therefore, the frequent change of the output image induced by frequently adjusting the output parameter every time the first condition is met may be avoided, preventing a negative influence on the visual effect.
  • the present disclosure also provides an electronic device.
  • the electronic device may be used to perform any data processing method provided by various embodiments of the present disclosure.
  • the electronic device 400 may include a processor 401 , a memory 402 , and a communication bus 403 .
  • the communication bus 403 may be used to realize the communication connection between the processor 401 and the memory 402 .
  • the processor 401 may be used to execute an information processing program stored in the memory 402 , to realize any data processing method provided by various embodiments of the present disclosure.
  • the processor 401 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), a programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or any conventional processor, etc.
  • the present disclosure also provides a storage medium, e.g., a non-transitory computer readable storage medium, on which executable instructions can be stored, and the executable instructions may be executed by one or more processors to implement any data processing method provided by various embodiments of the present disclosure.
  • the storage medium may be a computer-readable storage medium, for example, a ferroelectric memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, an optical disc read only memory (CD-ROM), any other memory, or any combination thereof.
  • the present disclosure may be provided as methods, systems, or computer program products. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to disk storage and optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory may produce a product comprising instruction apparatus.
  • the instruction apparatus may realize the function specified in one or more operations (e.g., steps) of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
US18/232,730 2022-09-30 2023-08-10 Data processing method, electronic device, and storage medium Pending US20240112405A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211218227.4A CN115564932A (zh) 2022-09-30 2022-09-30 Data processing method and data processing device
CN202211218227.4 2022-09-30

Publications (1)

Publication Number Publication Date
US20240112405A1 (en)

Family

ID=84745406

Family Applications (1)

Application Number Title Priority Date Filing Date
US18/232,730 Pending US20240112405A1 (en) 2022-09-30 2023-08-10 Data processing method, electronic device, and stroage medium

Country Status (3)

Country Link
US (1) US20240112405A1 (zh)
CN (1) CN115564932A (zh)
DE (1) DE102023122733A1 (zh)

Also Published As

Publication number Publication date
CN115564932A (zh) 2023-01-03
DE102023122733A1 (de) 2024-04-04


Legal Events

Date Code Title Description
AS Assignment

Owner name: LENOVO (BEIJING) LIMITED, CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, GUANNAN;QI, CHAO;SIGNING DATES FROM 20221013 TO 20221016;REEL/FRAME:064556/0755

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION