WO2020238741A1 - Image processing method, related device, and computer storage medium

Info

Publication number: WO2020238741A1
Authority: WIPO (PCT)
Application number: PCT/CN2020/091503
Other languages: English (en), Chinese (zh)
Inventors: 梁天鹰, 赖武军
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2020238741A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems

Description

  • the present invention relates to the field of communication technology, in particular to image processing methods, related equipment and computer storage media.
  • AR devices can display virtual images for users while viewing real-world scenes. Users can also interact with virtual images to achieve augmented reality effects.
  • head-mounted electronic devices use the framework shown in FIG. 1 to realize the display of composite images.
  • the head-mounted electronic device collects real images in the real world through the camera, and processes the real image and the current posture information of the head-mounted electronic device through the algorithm processing module to generate virtual images.
  • the real image and the virtual image are synthesized into a composite image to be displayed, and then the composite image is projected onto the display of the head-mounted electronic device for the user to view.
  • the head-mounted electronic device incurs a relatively long time delay from the collection of the real image to the display of the composite image, and the user experience is poor.
  • the embodiment of the invention discloses an image processing method, related equipment and a computer storage medium, which can solve the problems of large time delay in the existing image display.
  • an embodiment of the present invention discloses an image processing method applied to a head-mounted electronic device.
  • the method includes: the head-mounted electronic device acquires a real image collected by a camera at a first moment, where the real image may be an image collected by the camera in the real-world coordinate system.
  • the viewing angle of the real image collected by the camera is usually larger than the viewing angle of the image to be displayed by the head-mounted electronic device.
  • the head-mounted electronic device can obtain its own posture information at the first moment, which reflects the movement posture of the head-mounted electronic device at the first moment; the posture information includes, but is not limited to, the attitude angle of the head-mounted electronic device (such as the depression angle, the elevation angle, etc.), its position, movement speed, movement angular velocity, or other attitude-related information.
  • the head-mounted electronic device can calculate the re-projection matrix according to the posture information at the first moment, and then use the re-projection matrix to process the real image to obtain the image to be displayed by the head-mounted electronic device at the second moment.
  • the re-projection matrix can be used to reflect the mapping relationship between the image that the head-mounted electronic device needs to display at the first moment and the image that needs to be displayed at the second moment, which is later than the first moment.
  • this solves the problems of large time delay and poor user experience in the prior art, thereby improving image display efficiency and the user experience.
  • the head-mounted electronic device can predict the posture information at the second moment according to the posture information at the first moment, and then calculate the re-projection matrix from the respective posture information at the first moment and the second moment.
  • the posture information of the head-mounted electronic device may include movement speed and movement angular velocity.
  • the head-mounted electronic device can calculate the relative rotation angle and relative translation vector of the head-mounted electronic device at the second moment relative to the first moment according to the movement speed and the movement angular velocity at the first moment. Further, the head-mounted electronic device calculates and obtains the reprojection matrix according to the calculated relative rotation angle and relative translation vector.
  • the head-mounted electronic device may calculate a rotation matrix according to the relative rotation angle and the rotation direction of the relative rotation angle, where the rotation matrix is used to indicate the relative rotation of the head-mounted electronic device at the second moment relative to the first moment. Further, the head-mounted electronic device calculates the re-projection matrix according to the relative translation vector and the rotation matrix.
  • the head-mounted electronic device may perform multi-thread processing on real images to obtain processed images. Further, the head-mounted electronic device can use the re-projection matrix to perform image mapping on the processed image to obtain the image to be displayed on the head-mounted electronic device at the second moment.
  • the head-mounted electronic device performs the multi-threaded processing of the real image as follows in specific implementations: the head-mounted electronic device preprocesses the real image in the main thread to obtain a preprocessed real image, where the preprocessing includes, but is not limited to, de-distortion, correction, translation, zooming, or other image processing. Further, the head-mounted electronic device can obtain a second image based on the preprocessed real image, for example, by performing dimensionality reduction processing on the preprocessed real image.
  • the dimensionality reduction processing may include, but is not limited to, channel-number reduction (such as single-channel image extraction), resolution reduction (such as image downsampling), or other processing.
  • the head-mounted electronic device may synthesize the preprocessed real image and the respective virtual image of each slave thread to obtain a processed image.
  • the main thread and any slave threads are independent of each other, and any two slave threads also run independently of each other.
  • the main thread and each slave thread support multi-threaded parallel processing, independent of each other and do not affect each other.
  • the second moment is a time point that is a preset duration after the first moment, and the preset duration is the length of time the head-mounted electronic device takes to process the real image into an image supported for display on its display screen.
  • the embodiments of the present invention disclose a head-mounted electronic device including a functional unit for executing the method of the first aspect.
  • an embodiment of the present invention provides yet another head-mounted electronic device, including a memory, a communication interface, and a processor coupled with the memory and the communication interface; the memory is used to store instructions, the processor is used to execute the instructions, and the communication interface is used to communicate with other devices (specifically, user equipment, vehicle-mounted devices, or other network devices such as servers) under the control of the processor; the processor performs the method described in the first aspect when executing the instructions.
  • a computer-readable storage medium stores program codes for image processing.
  • the program code includes instructions for executing the method described in the first aspect above.
  • a computer program product including instructions, which when run on a computer, causes the computer to execute the method described in the first aspect.
  • a chip product is provided to implement the foregoing first aspect or the method in any possible implementation manner of the first aspect.
  • FIG. 1 is a schematic diagram of a processing frame of a head-mounted electronic device provided by the prior art.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an image provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an image operation provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a multi-thread operation provided by an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a head-mounted electronic device provided by an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another head-mounted electronic device provided by an embodiment of the present invention.
  • the head-mounted electronic device adopts the framework shown in FIG. 1 to realize the composite display of the real image and the virtual image, and there will be a large time delay in the whole process.
  • these delays mainly come from camera acquisition and imaging, the algorithm processing module, and the image synthesis module, and the processing across these modules or devices is serial.
  • the processing delays of the modules/devices therefore accumulate, which leads to a large overall time delay and a reduced user experience.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present invention.
  • the method shown in Figure 2 includes the following implementation steps:
  • Step S201 The head-mounted electronic device acquires the real image at the first moment collected by the camera.
  • the real image refers to the to-be-displayed image collected by the camera in the real-world coordinate system.
  • when the posture of the head-mounted electronic device changes, the head-mounted electronic device can collect a real image of the real world at a first moment (such as time T) and use it to predict the image that needs to be displayed at a second moment (for example, time T+X). This avoids a large time delay between the image displayed by the head-mounted electronic device at time (T+X) and the real image in the real world, which can easily cause dizziness and degrade the user experience.
  • the head-mounted electronic device can obtain the real image at the first moment collected by the camera.
  • specifically, the head-mounted electronic device can call the camera to collect a real image of the real world (that is, in the real-world coordinate system) at the first moment.
  • the real world coordinate system is a reference coordinate system located in the real world, and the reference coordinate system is usually set manually or customized by the system.
  • the head-mounted electronic device can send a control signal to the camera to call the camera to collect real images at the first moment.
  • the viewing angle of the real image collected by the camera is generally larger than the viewing angle of the image displayed by the head-mounted electronic device.
  • the viewing angle of the real image may be 120°
  • the viewing angle of the image supported by the head-mounted electronic device may be 45°.
  • refer to FIG. 3 for a schematic diagram of image display.
  • the real image collected by the camera covers the entire scene, including the puppy, the trees, and the basketball in the illustration.
  • the picture supported by the head-mounted electronic device is only part of the real image, shown by the dashed box in the figure, and includes the puppy and the trees; a rough sketch of how much of the camera image the display's viewing angle covers follows below.
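  • as an illustration, a minimal Python sketch follows, assuming an ideal pinhole camera, that estimates what fraction of the camera image the display's viewing angle covers, using the example angles above (a 120° capture angle and a 45° display angle); the function name is illustrative.

```python
import math

def display_crop_fraction(camera_fov_deg: float, display_fov_deg: float) -> float:
    """Fraction of the camera image width covered by the display's field of view,
    under a pinhole model where image extent is proportional to tan(fov / 2)."""
    return (math.tan(math.radians(display_fov_deg) / 2)
            / math.tan(math.radians(camera_fov_deg) / 2))

# Example angles from the text: 120-degree capture, 45-degree display.
frac = display_crop_fraction(120.0, 45.0)
print(f"display covers ~{frac:.1%} of the camera image width")  # ~23.9%
```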
  • Step S202 The head-mounted electronic device obtains the posture information at the first moment, and determines the re-projection matrix according to the posture information at the first moment.
  • the re-projection matrix is used to reflect the mapping relationship between the required display images of the head-mounted electronic device at the first moment and the second moment. The second moment is later than the first moment.
  • Step S203 The head-mounted electronic device processes the real image according to the re-projection matrix to obtain the to-be-displayed image of the head-mounted electronic device at the second moment, so that the head-mounted electronic device can display the to-be-displayed image on the display screen.
  • the head-mounted electronic device may collect the posture information at the first moment through the motion sensor.
  • the posture information is information used to describe the motion posture of the head-mounted electronic device in the real world, which may include, but is not limited to, the pitch angle, attitude angle, movement speed, movement angular velocity, or other attitude-related information of the head-mounted electronic device.
  • the motion sensor includes, but is not limited to, a speed sensor, a gyroscope, a pressure sensor, or other sensors used to measure the attitude of the device, and this application is not limited.
  • the head-mounted electronic device can predict the posture information at the second moment based on the posture information at the first moment, and calculate, based on the posture information at the second moment, the to-be-displayed image that the head-mounted electronic device needs to display at the second moment. Specifically, after obtaining the posture information of the second moment, the head-mounted electronic device can calculate the relative posture information of the second moment relative to the first moment according to the respective posture information of the first moment and the second moment.
  • the relative posture information includes, but is not limited to, the relative rotation angle, the rotation direction of the relative rotation angle, the relative translation vector (including displacement and direction), or other information.
  • the head-mounted electronic device can calculate a re-projection matrix based on the relative posture information, and the re-projection matrix is used to indicate the difference between the image that the head-mounted electronic device needs to display at the first moment and the image that needs to be displayed at the second moment. Mapping relations.
  • the head-mounted electronic device can use the re-projection matrix to perform image mapping processing (also called image distortion) on the real image collected at the first moment to obtain the to-be-displayed image that needs to be displayed at the second moment .
  • the head-mounted electronic device may use the reprojection matrix to perform image mapping processing on the processed image to obtain the image to be displayed at the second moment.
  • the processed image is the image obtained after the head-mounted electronic device performs intermediate processing on the real image collected at the first moment.
  • the specific implementation of the intermediate processing can be customized by the system, such as image de-distortion, correction, scaling, and synthesis; the details of these operations are described below in this application.
  • the first moment in this application may be recorded as time T, and the second moment may be recorded as time (T+X).
  • X represents the time delay from when the head-mounted electronic device collects the real image to when it is processed into the desired display image.
  • the time delay can be obtained through measurement in advance, for example, by using dedicated time-delay test equipment.
  • the time delay X is mainly composed of the following three parts: the time delay of camera acquisition and imaging (that is, the delay in collecting and imaging the real image), the delay of the intermediate processing (such as image mapping), and the time delay required for the head-mounted electronic device to display the image; a small sketch of this budget follows below.
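  • for illustration only, the following sketch sums the three parts into the prediction horizon X; the millisecond values are hypothetical placeholders and would in practice come from the delay measurement described above.

```python
# Hypothetical measured delays in milliseconds (placeholders, not measured values).
capture_ms = 16.0     # camera acquisition and imaging
processing_ms = 8.0   # intermediate processing (image mapping, etc.)
display_ms = 11.0     # display refresh latency

X_ms = capture_ms + processing_ms + display_ms  # total delay X
print(f"X = {X_ms} ms")  # horizon used to predict the second moment T + X
```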
  • the posture information of the head-mounted electronic device includes the movement speed and the movement angular velocity.
  • suppose the movement speed of the head-mounted electronic device at the first moment is v and its movement angular velocity is ω; the movement speed and the movement angular velocity here are vectors in three-dimensional space, having both direction and magnitude.
  • assuming the motion remains constant over the delay, the head-mounted electronic device can use the following formula (1) to calculate the relative rotation angle θ and the relative translation vector L:

    θ = ω·X,  L = v·X    (1)

  • X represents the time delay between the second moment and the first moment (it may also be referred to as the duration).
  • the head-mounted electronic device may calculate the rotation matrix according to the relative rotation angle (which specifically includes the magnitude θ of the relative rotation angle and its direction). For example, the relative rotation angle can be substituted into the Rodrigues formula to calculate the rotation matrix R, as shown in the following formula (2):

    R = cos θ·I + (1 − cos θ)·n·nᵀ + sin θ·[n]ₓ    (2)

  • n represents the unit rotation axis, which can be determined from the relative rotation angle vector; the formula expresses a rotation by the angle θ around the rotation axis n, I is the 3×3 identity matrix, and [n]ₓ is the skew-symmetric matrix of n.
  • ᵀ denotes the transpose.
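  • a minimal Python (NumPy) sketch of formulas (1) and (2) follows, assuming the motion measured at the first moment stays constant over the delay X; the function names and the sample velocity values are illustrative.

```python
import numpy as np

def relative_pose(v: np.ndarray, w: np.ndarray, X: float):
    """Formula (1): relative rotation vector theta = w * X and
    relative translation L = v * X, assuming constant motion over X."""
    return w * X, v * X

def rodrigues(theta_vec: np.ndarray) -> np.ndarray:
    """Formula (2): rotation matrix from a rotation vector,
    R = cos(theta) * I + (1 - cos(theta)) * n n^T + sin(theta) * [n]_x."""
    theta = np.linalg.norm(theta_vec)
    if theta < 1e-12:
        return np.eye(3)                        # no rotation
    n = theta_vec / theta                       # unit rotation axis
    n_cross = np.array([[0.0, -n[2], n[1]],
                        [n[2], 0.0, -n[0]],
                        [-n[1], n[0], 0.0]])    # skew-symmetric matrix [n]_x
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * n_cross)

# Illustrative values: slow forward drift plus a 0.5 rad/s yaw over a 35 ms delay.
theta_vec, L = relative_pose(v=np.array([0.0, 0.0, 0.1]),
                             w=np.array([0.0, 0.5, 0.0]),
                             X=0.035)
R = rodrigues(theta_vec)
```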
  • the specific calculation method of the reprojection matrix is not limited.
  • the head-mounted electronic device can comprehensively calculate the reprojection matrix according to parameters such as R, L, and the resolution of the real image (or processed image).
  • the reprojection matrix is used to reflect the mapping relationship between the respective display images of the head-mounted electronic devices at the second moment and the first moment.
  • the head-mounted electronic device can use the re-projection matrix to map the processed image (for example, the image obtained after intermediate processing of the real image at the first moment) into the image to be displayed at the second moment, so as to display the to-be-displayed image on the display screen of the head-mounted electronic device for the user to watch; a sketch of this mapping step follows below.
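  • since the text leaves the exact construction of the re-projection matrix open, the sketch below uses one common rotation-only approximation, H = K·R·K⁻¹, where the intrinsic matrix K is derived from the image resolution and viewing angle; the relative translation L is ignored here because applying it correctly would require per-pixel scene depth. Function names are illustrative; OpenCV's warpPerspective performs the image mapping.

```python
import numpy as np
import cv2

def intrinsics(width: int, height: int, fov_x_deg: float) -> np.ndarray:
    """Pinhole intrinsic matrix K from the image resolution and horizontal FOV."""
    fx = (width / 2) / np.tan(np.radians(fov_x_deg) / 2)
    return np.array([[fx, 0.0, width / 2],
                     [0.0, fx, height / 2],
                     [0.0, 0.0, 1.0]])

def reproject(image: np.ndarray, R: np.ndarray, fov_x_deg: float = 120.0) -> np.ndarray:
    """Warp the processed image into the predicted view at the second moment,
    using the rotation-only re-projection matrix H = K @ R @ K^-1."""
    h, w = image.shape[:2]
    K = intrinsics(w, h, fov_x_deg)
    H = K @ R @ np.linalg.inv(K)                  # re-projection (homography) matrix
    return cv2.warpPerspective(image, H, (w, h))  # image mapping ("image distortion")

# Usage: an identity rotation leaves the image unchanged.
img = np.zeros((480, 640, 3), dtype=np.uint8)
predicted = reproject(img, R=np.eye(3))
```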
  • FIG. 4 shows a schematic diagram of an image display effect of a head-mounted electronic device.
  • the image displayed by the head-mounted electronic device at the first time T is the first image
  • the image displayed at the second time (T+X) is the second image (ie, the image to be displayed).
  • the first image includes the puppy and the trees
  • the second image includes the trees and the basketball.
  • the head-mounted electronic device can collect a real image at time T through a camera. The real image is shown in the figure, including pictures of puppies, trees, and basketballs.
  • the head-mounted electronic device collects its own motion data (i.e., posture information) at time T through the motion sensor, and then, based on the posture information at the first moment, predicts from the real image the image that needs to be displayed at time (T+X) (i.e., the second image).
  • specifically, according to the predicted viewing angle and picture range (reflected in the re-projection matrix), the head-mounted electronic device can process the real image, for example by zooming, rotating, translating, or warping it, to obtain the second image that needs to be displayed at time (T+X); the details are not repeated here.
  • the head-mounted electronic device may process the real image at the first moment in parallel in a multi-threaded manner to obtain the processed image, so that the subsequent head-mounted electronic device predicts and obtains the image to be displayed at the second moment based on the processed image.
  • the use of a multi-threaded parallel image processing solution can avoid the problem of large time delay consumed by serial processing, reduce the time delay consumed by image intermediate processing, and improve image processing efficiency.
  • the head-mounted electronic device may preprocess the real image at the first moment collected by the camera to obtain the preprocessed real image.
  • the pre-processing may be image processing customized by the system, which may include, but is not limited to, image correction, image scaling, image translation, and image de-distortion.
  • in one or more slave threads, the head-mounted electronic device may respectively perform virtual synthesis on the intermediate image to obtain a virtual image corresponding to each slave thread.
  • the intermediate image is obtained based on the preprocessed real image, for example, by performing dimensionality reduction processing on the preprocessed real image.
  • the dimensionality reduction processing includes but is not limited to channel number reduction processing, resolution reduction processing, and so on.
  • for example, the head-mounted electronic device can perform single-channel data extraction on the preprocessed real image to convert the RGB image into a black-and-white (single-channel) image, and can then downsample the black-and-white image to reduce its resolution, obtaining the processed intermediate image; a sketch of this step follows below.
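  • a short OpenCV sketch of this dimensionality reduction step, assuming an RGB input array; the scale factor is illustrative.

```python
import cv2

def reduce_dimensions(rgb_image, scale: float = 0.5):
    """Channel-number reduction followed by resolution reduction, so that the
    slave threads operate on less data and consume less compute and power."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2GRAY)    # single-channel extraction
    small = cv2.resize(gray, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)      # downsampling
    return small
```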
  • the head-mounted electronic device may store the intermediate image in a preset data buffer area for use by other threads.
  • the amount of calculation data of other threads can be reduced, and the calculation complexity and power consumption of other threads can be reduced.
  • any two threads in multithreading are independent of each other and do not affect each other.
  • the main thread and any slave thread, or any two slave threads, run independently of each other, which can improve image processing efficiency and avoid the large time-delay consumption of serial processing.
  • the head-mounted electronic device can use a preset image synthesis algorithm, such as a SLAM algorithm, and optionally can also combine data collected by other sensors (for example, a sensor for collecting data related to the virtual objects to be synthesized) to perform virtual synthesis on the intermediate image, that is, to add a virtual object to be displayed for the user into the intermediate image, obtaining a synthesized virtual image.
  • the head-mounted electronic device may store the synthesized virtual image in the corresponding data buffer area.
  • the head-mounted electronic device can also synthesize the preprocessed real image and the virtual image of each slave thread to obtain the processed image.
  • a locking mechanism can be added between multiple threads to ensure the security of data access.
  • the data can be protected by a locking mechanism to prevent other threads from accessing it, that is, only the current thread can access the data at that time. After the current thread finishes using the data and releases it, other threads can be notified to access it, which avoids data tampering, data inconsistency, or data pollution when multiple threads access the data at the same time.
  • the intermediate image is usually locked using the locking mechanism; at this time, the other slave threads or the main thread cannot access the intermediate image. Only after the slave thread has used the intermediate image and unlocked it can the other threads be notified to access or use it; a minimal sketch of this mechanism follows below.
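  • a minimal sketch of such a locking mechanism around the shared data buffer, using Python's threading primitives; the class name is illustrative.

```python
import threading

class SharedBuffer:
    """Data buffer guarded by a lock: only one thread reads or writes the
    intermediate image at a time, avoiding tampering, inconsistency, or pollution."""
    def __init__(self):
        self._lock = threading.Lock()
        self._image = None

    def put(self, image):
        with self._lock:        # other threads block until the lock is released
            self._image = image

    def get(self):
        with self._lock:
            return self._image
```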
  • Fig. 5 shows the parallel processing of (X+1) threads to obtain the to-be-displayed image of the head-mounted electronic device at the second time (T+X).
  • thread 0 represents the main thread.
  • the head-mounted electronic device can obtain the Nth frame image collected by the camera at the first moment (time T), and further preprocess the Nth frame image to obtain The Nth frame image after preprocessing.
  • the head-mounted electronic device may perform dimensionality reduction processing on the preprocessed Nth frame image, such as extracting single-channel data, down-sampling, etc., to store the processed image in a data buffer.
  • thread 1 to thread X are all slave threads.
  • the processing flow of each slave thread is similar.
  • taking a slave thread as an example, the head-mounted electronic device can obtain the (N+1)th frame image from the data buffer area and then perform virtual synthesis on it, for example using the SLAM algorithm shown in the figure, to generate the corresponding virtual image.
  • the (N+1)th frame image is the image that the head-mounted electronic device needs to display at time (T+1); it may be part or all of the dimensionality-reduced image, which is not limited.
  • the head-mounted electronic device may store the virtual image in the data buffer area for subsequent use.
  • after the head-mounted electronic device obtains the virtual image of each slave thread, it combines them with the preprocessed Nth frame image of the main thread to synthesize the processed image, so that the head-mounted electronic device can subsequently perform image distortion on the processed image to obtain the to-be-displayed image at time (T+X). For details of the image distortion, refer to the relevant description in the foregoing embodiment, which is not repeated here. A sketch of this multi-threaded pipeline follows below.
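  • a rough sketch of this (X+1)-thread layout follows, with the main thread preprocessing the frame and a pool of slave threads performing virtual synthesis in parallel; all stage functions are hypothetical stand-ins for the stages described above.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def preprocess(frame):                 # main thread: de-distortion, correction, etc.
    return frame

def reduce_dimensions(frame):          # single-channel extraction + downsampling
    return frame.mean(axis=2)[::2, ::2]

def virtual_synthesis(intermediate):   # slave thread: e.g. SLAM-based synthesis
    return np.zeros_like(intermediate)

def compose(pre, virtuals):            # composite the real and virtual content
    return pre

def process_frame(raw_frame, num_slaves: int = 3):
    pre = preprocess(raw_frame)              # thread 0 (main thread)
    intermediate = reduce_dimensions(pre)    # stored in the data buffer area
    with ThreadPoolExecutor(max_workers=num_slaves) as pool:
        # threads 1..X run independently of each other and in parallel
        virtuals = list(pool.map(virtual_synthesis, [intermediate] * num_slaves))
    return compose(pre, virtuals)            # processed image, ready for warping

frame = np.zeros((480, 640, 3), dtype=np.uint8)
processed = process_frame(frame)
```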
  • the head-mounted electronic device is a type of electronic device, and this application only uses the head-mounted electronic device as an example for illustration, but it does not constitute a limitation, and may also be other electronic devices. Users can wear head-mounted electronic devices to achieve different effects such as virtual reality (VR), AR, and mixed reality (MR).
  • the head-mounted electronic device may be glasses, a helmet, goggles, and the like.
  • the electronic device may also be other devices including a display screen, such as an autonomous vehicle including a display screen.
  • FIG. 6 is a schematic structural diagram of a head-mounted electronic device according to an embodiment of the present invention.
  • the head-mounted electronic device 600 shown in FIG. 6 includes a communication unit 602 and a processing unit 604, where:
  • the communication unit 602 is configured to obtain a real image collected by the camera at the first moment, and the real image is an image that needs to be displayed collected by the camera in the real world coordinate system;
  • the processing unit 604 is configured to determine a re-projection matrix according to the posture information of the head-mounted electronic device at the first moment, where the re-projection matrix is used to reflect the mapping relationship between the image to be displayed by the head-mounted electronic device at the first moment and the image to be displayed at the second moment.
  • the processing unit 604 is further configured to process the real image according to the reprojection matrix to obtain the image to be displayed by the head-mounted electronic device at the second moment.
  • the processing unit 604 is specifically configured to predict the posture information at the second moment according to the posture information of the head-mounted electronic device at the first moment, and to calculate the re-projection matrix according to the posture information of the first moment and the posture information of the second moment.
  • the posture information includes the movement speed and the movement angular velocity of the head-mounted electronic device.
  • the processing unit 604 is specifically configured to: calculate, according to the movement speed and the movement angular velocity of the head-mounted electronic device at the first moment, the relative rotation angle and the relative translation vector of the head-mounted electronic device at the second moment relative to the first moment; and calculate the re-projection matrix according to the relative rotation angle and the relative translation vector.
  • the processing unit 604 is specifically configured to perform multi-thread processing on the real image to obtain a processed image, and to perform image mapping on the processed image according to the re-projection matrix to obtain the image to be displayed at the second moment.
  • the processing unit 604 is further specifically configured to: preprocess the real image in the main thread to obtain the preprocessed real image; perform virtual synthesis on the second image in at least one slave thread to obtain a virtual image corresponding to each slave thread, where the second image is obtained based on the preprocessed real image; and synthesize the preprocessed real image and the virtual images corresponding to the at least one slave thread to obtain the processed image, where the main thread and the at least one slave thread support multi-thread parallel processing.
  • the head-mounted electronic device 600 may further include a storage unit 606 for storing program codes and data of the head-mounted electronic device 600, for example, storing program codes for image processing.
  • the processing unit 604 is configured to call the program code in the storage unit 606 to implement the content in the method embodiment described in FIG. 2 above.
  • each module or unit involved in the head-mounted electronic device 600 of the embodiment of the present invention may be specifically implemented by software programs or hardware.
  • the modules or units involved in the head-mounted electronic device 600 are software modules or software units.
  • alternatively, the modules or units involved in the head-mounted electronic device 600 can be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD can be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof, which is not limited in the present invention.
  • FIG. 6 is only a possible implementation of the embodiment of the present application.
  • the head-mounted electronic device may also include more or fewer components, which is not limited here.
  • FIG. 7 is a schematic structural diagram of another head-mounted electronic device according to an embodiment of the present invention.
  • the head-mounted electronic device 100 may include a processor 110, a memory 120, a sensor module 130, a microphone 140, buttons 150, an input and output interface 160, a communication module 170, a camera 180, a battery 190, a display screen 1100, and so on.
  • the sensor module 130 may include a motion sensor 131 for detecting and obtaining posture information of the head-mounted electronic device.
  • the sensor module 130 may also include other sensors, such as a sound detector, a proximity light sensor, a distance sensor, a focal length detection optical sensor, an ambient light sensor, an acceleration sensor, and a temperature sensor.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the head-mounted electronic device 100.
  • the head-mounted electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the head-mounted electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
  • the processor 110 may include one or more interfaces.
  • interfaces may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and/or a serial peripheral interface (SPI), etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be respectively coupled to the motion sensor 131, the battery 190, the camera 180, etc. through different I2C bus interfaces.
  • the processor 110 may couple the motion sensor 131 through an I2C interface, so that the processor 110 and the motion sensor 131 communicate through the I2C bus interface to obtain posture information (ie, motion data) of the head-mounted electronic device.
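  • for illustration, a hedged sketch of such an I2C read using the smbus2 library; the device address, register layout, and byte order are hypothetical placeholders for a generic IMU, not the registers of any specific sensor.

```python
from smbus2 import SMBus

IMU_ADDR = 0x68   # hypothetical I2C address of the motion sensor 131
GYRO_REG = 0x43   # hypothetical first gyroscope data register

def read_angular_velocity(bus_id: int = 1):
    """Read raw 3-axis gyroscope samples over the I2C bus (requires real hardware)."""
    with SMBus(bus_id) as bus:
        raw = bus.read_i2c_block_data(IMU_ADDR, GYRO_REG, 6)
    # combine high/low bytes into one signed 16-bit value per axis
    axes = [int.from_bytes(raw[i:i + 2], "big", signed=True) for i in (0, 2, 4)]
    # converting to rad/s would depend on the sensor's configured full-scale range
    return axes
```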
  • the SPI interface can be used for the connection between the processor and the sensor.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the communication module 170.
  • the processor 110 communicates with the Bluetooth module in the communication module 170 through the UART interface to implement the Bluetooth function.
  • the MIPI interface can be used to connect the processor 110 with the display screen 1100, the camera 180 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 180 communicate through a CSI interface to realize the shooting function of the head-mounted electronic device 100.
  • the processor 110 and the display screen 1100 communicate through a DSI interface to realize the display function of the head-mounted electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 180, the display screen 1100, the communication module 170, the sensor module 130, the microphone 140, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, and so on.
  • the USB interface can be used to connect a charger to charge the head-mounted electronic device 100, and can also be used to transfer data between the head-mounted electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them, or to connect other electronic devices such as mobile phones.
  • the USB interface can be USB 3.0, compatible with high-speed DisplayPort (DP) signal transmission, and can transmit high-speed video and audio data.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely illustrative, and does not constitute a structural limitation of the head-mounted electronic device 100.
  • the head-mounted electronic device 100 may also adopt different interface connection modes in the above-mentioned embodiments, or a combination of multiple interface connection modes.
  • the head-mounted electronic device 100 may include a wireless communication function.
  • the communication module 170 may include a wireless communication module and a mobile communication module.
  • the wireless communication function can be realized by an antenna (not shown), a mobile communication module (not shown), a modem processor (not shown), a baseband processor (not shown), and the like.
  • the antenna is used to transmit and receive electromagnetic wave signals.
  • the head-mounted electronic device 100 may include multiple antennas, and each antenna may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the head-mounted electronic device 100.
  • the mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module can receive electromagnetic waves by the antenna, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation by the antenna.
  • at least part of the functional modules of the mobile communication module may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs sound signals through audio equipment (not limited to speakers, etc.), or displays images or videos through the display screen 1100.
  • the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module or other functional modules.
  • the wireless communication module can provide wireless communication solutions applied to the head-mounted electronic device 100, including a wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the wireless communication module receives electromagnetic waves via an antenna, modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module may also receive the signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna.
  • the antenna of the head mounted electronic device 100 is coupled with the mobile communication module, so that the head mounted electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the head-mounted electronic device 100 implements a display function through a GPU, a display screen 1100, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display screen 1100 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the number of display screens 1100 in the head-mounted electronic device 100 may be two, corresponding to the two eyeballs of the user 200 respectively.
  • the content displayed on the two displays can be displayed independently. Different images can be displayed on the two displays to improve the three-dimensional sense of the image.
  • the number of the display screen 1100 in the head-mounted electronic device 100 may also be one to correspond to the two eyeballs of the user 200.
  • the head-mounted electronic device 100 can implement a shooting function through an ISP, a camera 180, a video codec, a GPU, a display screen 1100, and an application processor.
  • the ISP is used to process the data fed back by the camera 180. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 180.
  • the camera 180 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
  • the head-mounted electronic device 100 may include 1 or N cameras 180, and N is a positive integer greater than 1.
  • the camera 180 can be installed on the side of the head-mounted electronic device 100, and can also be installed at a position between two display screens on the head-mounted electronic device 100.
  • the camera 180 is used to capture images and videos within the viewing angle of the user 200 in real time.
  • the head-mounted electronic device 100 generates a virtual image according to the captured real-time image and video, and displays the virtual image on the display screen 1100.
  • the processor 110 may determine the virtual image displayed on the display screen 1100 according to the still image or video image captured by the camera 180, combined with the data (such as brightness, sound, etc.) acquired by the sensor module 130, so as to superimpose a virtual image on real-world objects.
  • the digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals.
  • for example, the digital signal processor is used to perform a Fourier transform on frequency-point energy.
  • Video codecs are used to compress or decompress digital video.
  • the head mounted electronic device 100 may support one or more video codecs.
  • the head-mounted electronic device 100 can play or record videos in a variety of encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • the NPU is a neural-network (NN) computing processor.
  • through the NPU, applications such as intelligent cognition of the head-mounted electronic device 100 can be realized, for example image recognition, face recognition, voice recognition, and text understanding.
  • the memory 120 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the head-mounted electronic device 100 by running instructions stored in the memory 120.
  • the processor 110 may call instructions stored in the memory 120 to execute all or part of the steps in the method embodiment described in FIG. 2 above.
  • the memory 120 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the head-mounted electronic device 100.
  • the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the head-mounted electronic device 100 can implement audio functions through an audio module, a speaker, a microphone 140, a headphone interface, and an application processor. For example, music playback, recording, etc.
  • the audio module is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be provided in the processor 110, or some functional modules of the audio module may be provided in the processor 110.
  • the loudspeaker, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the head-mounted electronic device 100 can listen to music through a speaker, or listen to a hands-free call.
  • the microphone 140, also called a "mic" or "sound transducer", is used to convert sound signals into electrical signals.
  • the head-mounted electronic device 100 may be provided with at least one microphone 140. In some other embodiments, the head-mounted electronic device 100 may be provided with two microphones 140, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the head-mounted electronic device 100 may also be provided with three, four or more microphones 140 to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the head-mounted electronic device 100 may include a sound detector 132 that can detect and process a voice signal for controlling the portable electronic device.
  • the sound detector may include a microphone 140.
  • the head-mounted electronic device 100 can use the microphone 140 to convert sound into electrical signals.
  • the sound detector 132 can then process the electrical signal and recognize the signal as a command of the head-mounted electronic device 100.
  • the processor 110 may be configured to receive a voice signal from the microphone 140. After receiving the voice signal, the processor 110 may run the sound detector 132 to recognize the voice command. For example, when a voice command is received, the head-mounted electronic device 100 can look up a contact in the stored user contact list and automatically dial the contact's phone number.
  • the headphone jack is used to connect wired headphones.
  • the headphone interface can be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the head-mounted electronic device 100 may include one or more buttons 150, which can control the head-mounted electronic device and provide the user with access to functions on the head-mounted electronic device 100.
  • the key 150 may be in the form of a button, switch, dial, and touch or proximity sensing device (such as a touch sensor).
  • the user 200 can turn on the display screen 1100 of the head-mounted electronic device 100 by pressing a button.
  • the buttons 150 include a power button, a volume button, and so on.
  • a button 150 may be a mechanical button or a touch button.
  • the head mounted electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the head mounted electronic device 100.
  • the head-mounted electronic device 100 may include an input-output interface 160, and the input-output interface 160 may connect other devices to the head-mounted electronic device 100 through appropriate components.
  • Components may include audio/video jacks, data connectors, etc., for example.
  • Sound detectors can detect and process voice signals used to control portable electronic devices.
  • the head-mounted electronic device 100 can implement eye tracking, for example by using an infrared device (such as an infrared transmitter) together with an image acquisition device (such as a camera) to track the movement of the user's eyeballs.
  • the proximity light sensor may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the head mounted electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the head mounted electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the head mounted electronic device 100. When insufficient reflected light is detected, the head-mounted electronic device 100 can determine that there is no object near the head-mounted electronic device 100.
  • the head-mounted electronic device 100 may use the proximity light sensor to detect a gesture operation at a specific position of the head-mounted electronic device 100 to achieve the purpose of associating the gesture operation with an operation command.
  • the head mounted electronic device 100 can measure the distance by infrared or laser. In some embodiments, the head-mounted electronic device 100 may use a distance sensor to measure distances to achieve fast focusing.
  • the ambient light sensor is used to sense the brightness of the ambient light.
  • the head mounted electronic device 100 can adaptively adjust the brightness of the display screen 1100 according to the perceived brightness of the ambient light.
  • the ambient light sensor can also be used to automatically adjust the white balance when taking pictures.
  • the acceleration sensor can detect the magnitude of acceleration of the head-mounted electronic device 100 in various directions (generally three-axis). When the head-mounted electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of head-mounted electronic devices and applied to applications such as pedometers.
  • the temperature sensor is used to detect temperature.
  • the head-mounted electronic device 100 uses the temperature detected by the temperature sensor to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor exceeds the threshold, the head-mounted electronic device 100 reduces the performance of the processor located near the temperature sensor, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the head-mounted electronic device 100 heats the battery 190 to prevent the head-mounted electronic device 100 from shutting down abnormally due to low temperature. In some other embodiments, when the temperature is lower than another threshold, the head mounted electronic device 100 boosts the output voltage of the battery 190 to avoid abnormal shutdown caused by low temperature.
  • the focal length detection optical sensor is used to detect the focal length of the eyeball of the user 200.
  • the head-mounted electronic device 100 may further include an infrared light source 1200.
  • the focal length detection optical sensor may cooperate with the infrared light source 1200 to detect the focal length of the eyeball of the user 200.
  • the focal length detection optical sensor 132 and the infrared light source 1200 may be arranged on the side of the display screen close to the eyeballs.
  • there may be two focal length detection optical sensors 132 and two infrared light sources 1200, with each eyeball corresponding to one sensor and one light source for detecting that eyeball's focal length.
  • the positions and numbers of the focal length detection optical sensor 132, the infrared light source 1200, and the camera 180 on the head mounted electronic device 100 shown in FIG. 7 are only for explaining the embodiment of the present application and should not constitute a limitation.
  • alternatively, there may be one focal length detection optical sensor 132 and one infrared light source 1200.
  • a single focal length detection optical sensor 132 and a single infrared light source 1200 can be used to detect the focal length of one eyeball, or to detect the focal lengths of both eyeballs at the same time.
  • the motion sensor 131 may be used to determine the motion posture of the head-mounted electronic device 100, for example the angular velocity of the head-mounted electronic device 100 about three axes (i.e., the x, y, and z axes).
  • Motion sensors can also be used in navigation and motion-sensing game scenarios.
  • the motion sensor includes, but is not limited to, a gyroscope sensor, an angular velocity sensor, or a speed sensor. (An orientation-integration sketch follows.)
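As an illustration of how angular-velocity readings can be turned into a motion posture, here is a minimal Euler-integration sketch; the rotation-matrix representation and the first-order small-angle update are assumptions for illustration (a real tracker would renormalize the matrix and typically fuse other sensors).

```python
import numpy as np

def integrate_gyro(orientation, angular_velocity, dt):
    """Advance a 3x3 rotation matrix by one gyroscope sample.

    angular_velocity: rad/s about the x, y, z axes; dt: seconds.
    Uses a first-order small-angle update, so drift accumulates.
    """
    wx, wy, wz = np.asarray(angular_velocity) * dt
    # Skew-symmetric matrix of the small rotation vector.
    omega = np.array([[0.0, -wz,  wy],
                      [ wz, 0.0, -wx],
                      [-wy,  wx, 0.0]])
    return orientation @ (np.eye(3) + omega)
```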
  • the display screen 1100 is used to display images, videos, etc.
  • the display screen 1100 includes a display panel.
  • the display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the embodiment of the present invention also provides a non-transitory computer storage medium.
  • the non-transitory computer storage medium stores instructions; when the instructions run on a processor, the method flow shown in FIG. 2 is implemented.
  • the embodiment of the present invention also provides a computer program product.
  • when the computer program product runs on a processor, the method flow shown in FIG. 2 is implemented. (A sketch of the core reprojection step of that flow follows.)
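For orientation, the method flow of FIG. 2 obtains a real image captured at a first moment, determines a reprojection matrix from posture information, and processes the real image with that matrix to obtain the image displayed at a second moment. A minimal sketch of the processing step, assuming the reprojection matrix can be expressed as a 3x3 homography; this illustrates the idea only and is not the disclosed implementation.

```python
import cv2
import numpy as np

def reproject_for_display(real_image, reprojection_matrix):
    """Warp the real image captured at the first moment with the
    reprojection matrix so that it matches the pose expected at
    the second (display) moment. `reprojection_matrix` stands in
    for whatever 3x3 matrix is derived from posture information."""
    h, w = real_image.shape[:2]
    return cv2.warpPerspective(real_image, reprojection_matrix,
                               (w, h))

# Example: an identity matrix leaves the frame unchanged.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
out = reproject_for_display(frame, np.eye(3))
```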
  • the steps of the method or algorithm described in combination with the disclosure of the embodiment of the present invention may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
  • Software instructions can be composed of corresponding software modules, which can be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art.
  • An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and can write information to the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and the storage medium may be located in an ASIC.
  • the ASIC may be located in a head-mounted electronic device.
  • the processor and the storage medium may also exist as discrete components in the head-mounted electronic device.
  • the program can be stored in a computer-readable storage medium; when the program is executed, the processes of the above-mentioned method embodiments may be performed.
  • the aforementioned storage media include: a ROM, a RAM, a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Disclosed are an image processing method, a related device, and a computer storage medium. The method comprises the following steps: a head-mounted electronic device obtains a real image acquired by a camera at a first moment (S201); it determines a reprojection matrix according to posture information at the first moment (S202); and it then processes the real image using the reprojection matrix to obtain an image to be displayed by the head-mounted electronic device at a second moment (S203). Using the above method, the problems of relatively high time delay and relatively poor user experience in existing image display solutions can be resolved.
PCT/CN2020/091503 2019-05-24 2020-05-21 Procédé de traitement d'image, dispositif associé et support de stockage informatique WO2020238741A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910443434.1A CN110244840A (zh) 2019-05-24 2019-05-24 图像处理方法、相关设备及计算机存储介质
CN201910443434.1 2019-05-24

Publications (1)

Publication Number Publication Date
WO2020238741A1 true WO2020238741A1 (fr) 2020-12-03

Family

ID=67885069

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/091503 WO2020238741A1 (fr) 2019-05-24 2020-05-21 Procédé de traitement d'image, dispositif associé et support de stockage informatique

Country Status (2)

Country Link
CN (1) CN110244840A (fr)
WO (1) WO2020238741A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4254139A1 (fr) * 2022-03-30 2023-10-04 Holo-Light GmbH Procédé de reprojection pour générer des données d'image reprojetées, système de projection xr et module d'apprentissage automatique

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110244840A (zh) * 2019-05-24 2019-09-17 华为技术有限公司 图像处理方法、相关设备及计算机存储介质
CN111027374B (zh) * 2019-10-28 2023-06-30 华为终端有限公司 一种图像识别方法及电子设备
CN112752119B (zh) * 2019-10-31 2023-12-01 中兴通讯股份有限公司 一种时延误差校正方法、终端设备、服务器及存储介质
CN111243027B (zh) * 2020-02-28 2023-06-23 京东方科技集团股份有限公司 延时测量方法、装置及系统
CN113589919A (zh) * 2020-04-30 2021-11-02 华为技术有限公司 图像处理的方法和装置
CN111736692B (zh) * 2020-06-01 2023-01-31 Oppo广东移动通信有限公司 显示方法、显示装置、存储介质与头戴式设备
CN114071197B (zh) * 2020-07-30 2024-04-12 华为技术有限公司 投屏数据处理方法和装置
CN112132108A (zh) * 2020-10-09 2020-12-25 安徽江淮汽车集团股份有限公司 地面点云数据的提取方法、装置、设备及存储介质
CN114449251B (zh) * 2020-10-31 2024-01-16 华为技术有限公司 视频透视方法、装置、系统、电子设备及存储介质
CN112380989B (zh) * 2020-11-13 2023-01-24 歌尔科技有限公司 一种头戴显示设备及其数据获取方法、装置和主机
CN112486318A (zh) * 2020-11-26 2021-03-12 北京字跳网络技术有限公司 图像显示方法、装置、可读介质及电子设备
WO2022241701A1 (fr) * 2021-05-20 2022-11-24 华为技术有限公司 Procédé et dispositif de traitement d'images
CN114640838B (zh) * 2022-03-15 2023-08-25 北京奇艺世纪科技有限公司 画面合成方法、装置、电子设备及可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847785A (zh) * 2016-05-09 2016-08-10 上海乐相科技有限公司 一种图像处理方法、设备和系统
WO2016153603A1 (fr) * 2015-03-23 2016-09-29 Intel Corporation Facilitation de la représentation virtuelle en trois vraies dimensions d'objets réels au moyen de formes tridimensionnelles dynamiques
CN106502427A (zh) * 2016-12-15 2017-03-15 北京国承万通信息科技有限公司 虚拟现实系统及其场景呈现方法
CN106998409A (zh) * 2017-03-21 2017-08-01 华为技术有限公司 一种图像处理方法、头戴显示器以及渲染设备
CN110244840A (zh) * 2019-05-24 2019-09-17 华为技术有限公司 图像处理方法、相关设备及计算机存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9514571B2 (en) * 2013-07-25 2016-12-06 Microsoft Technology Licensing, Llc Late stage reprojection
CN104202547B (zh) * 2014-08-27 2017-10-10 广东威创视讯科技股份有限公司 投影画面中提取目标物体的方法、投影互动方法及其系统
US10962780B2 (en) * 2015-10-26 2021-03-30 Microsoft Technology Licensing, Llc Remote rendering for virtual images
US9978180B2 (en) * 2016-01-25 2018-05-22 Microsoft Technology Licensing, Llc Frame projection for augmented reality environments
CN105892658B (zh) * 2016-03-30 2019-07-23 华为技术有限公司 基于头戴显示设备预测头部姿态的方法和头戴显示设备
CN109656367A (zh) * 2018-12-24 2019-04-19 深圳超多维科技有限公司 一种应用于vr场景下的图像处理方法、装置及电子设备


Also Published As

Publication number Publication date
CN110244840A (zh) 2019-09-17

Similar Documents

Publication Publication Date Title
WO2020238741A1 (fr) Procédé de traitement d'image, dispositif associé et support de stockage informatique
WO2020192461A1 (fr) Procédé d'enregistrement pour la photographie à intervalle, et dispositif électronique
WO2020192458A1 (fr) Procédé d'affichage d'image et dispositif de visiocasque
US11759143B2 (en) Skin detection method and electronic device
CN113810601B (zh) 终端的图像处理方法、装置和终端设备
WO2022262313A1 (fr) Procédé de traitement d'image à base d'incrustation d'image, dispositif, support de stockage, et produit de programme
CN111179282A (zh) 图像处理方法、图像处理装置、存储介质与电子设备
WO2022017261A1 (fr) Procédé de synthèse d'image et dispositif électronique
TWI818211B (zh) 眼部定位裝置、方法及3d顯示裝置、方法
WO2022100685A1 (fr) Procédé de traitement de commande de dessin et dispositif associé
WO2021077911A1 (fr) Procédé et appareil de traitement d'inondation d'images et support de stockage
CN112954251B (zh) 视频处理方法、视频处理装置、存储介质与电子设备
WO2021208926A1 (fr) Dispositif et procédé de photographie
CN114489533A (zh) 投屏方法、装置、电子设备及计算机可读存储介质
CN115526787B (zh) 视频处理方法和装置
WO2021057626A1 (fr) Procédé de traitement d'image, appareil, dispositif et support de stockage informatique
CN114257920B (zh) 一种音频播放方法、系统和电子设备
CN112188094B (zh) 图像处理方法及装置、计算机可读介质及终端设备
CN113850709A (zh) 图像变换方法和装置
WO2022033344A1 (fr) Procédé de stabilisation vidéo, dispositif de terminal et support de stockage lisible par ordinateur
CN115631250B (zh) 图像处理方法与电子设备
CN115150542B (zh) 一种视频防抖方法及相关设备
CN111626931B (zh) 图像处理方法、图像处理装置、存储介质与电子设备
WO2021164387A1 (fr) Procédé et appareil d'avertissement précoce pour objet cible, et dispositif électronique
CN115706869A (zh) 终端的图像处理方法、装置和终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20812957

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20812957

Country of ref document: EP

Kind code of ref document: A1