CN107743223A - Image write-in control method, device and electronic equipment - Google Patents

Image write-in control method, device and electronic equipment

Info

Publication number
CN107743223A
CN107743223A CN201711047759.5A
Authority
CN
China
Prior art keywords
reading
image
state
nth frame
write
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711047759.5A
Other languages
Chinese (zh)
Inventor
王明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Technology Co Ltd
Original Assignee
Goertek Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Technology Co Ltd
Priority to CN201711047759.5A priority Critical patent/CN107743223A/en
Priority to PCT/CN2017/113567 priority patent/WO2019085109A1/en
Publication of CN107743223A publication Critical patent/CN107743223A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/398Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An embodiment of the present invention provides an image write-in control method, an apparatus, and an electronic device. In the method, the display module triggers a read operation on the Nth frame image in a buffer queue and simultaneously produces a first reading state. In response to the first reading state, the head-mounted virtual reality device obtains the trigger time corresponding to the first reading state and updates the time variable value corresponding to the first reading state to that trigger time. Based on the chronological order of the updated time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state, the device determines whether it has finished reading the Nth frame image. Only after the read operation has completed does the device begin writing the (N+1)th frame image into the buffer queue. The reading and writing of each image are therefore completed separately, read/write conflicts do not occur, picture tearing is avoided, and display quality is improved.

Description

Image write-in control method, device and electronic equipment
Technical field
The present invention relates to the technical field of data processing, and more particularly to an image write-in control method, an apparatus, and an electronic device.
Background technology
In recent years, virtual reality technology has been widely applied in many fields, such as architecture, medical care, media, and film and television. By watching images through a virtual reality device, a user can experience a feeling of being personally on the scene.
In the prior art, a buffer queue is usually provided in a virtual reality device, and an application installed on the device writes images to be displayed into this buffer queue. A display module in the device then reads an image to be displayed from the buffer queue and finally renders it on the screen of the device. However, the writing and reading of the image to be displayed are controlled by separate control signals: for example, writing is controlled by an image write signal, while reading is controlled by an image read signal, which may be a vertical synchronization signal (V-SYNC signal). As a result, the writing and reading of the image to be displayed easily conflict, which causes the picture shown on the screen to tear and seriously degrades display quality.
Summary of the invention
In view of this, embodiments of the present invention provide an image write-in control method, an apparatus, and an electronic device, which control the timing of image writing so that the writing and reading of images do not conflict, thereby avoiding picture tearing and improving display quality.
An embodiment of the present invention provides an image write-in control method, including:
in response to a first reading state produced by a read operation triggered by a display module on an Nth frame image in a buffer queue, updating the time variable value corresponding to the first reading state to the trigger time corresponding to the first reading state;
determining, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to a current second reading state, whether the Nth frame image has been read completely;
if the Nth frame image has been read completely, writing an (N+1)th frame image into the buffer queue.
Optionally, if the first reading state is a reading start state, the second reading state is a reading end state; if the first reading state is a reading end state, the second reading state is a reading start state.
Optionally, the determining, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state, whether the Nth frame image has been read completely includes:
if the time variable value corresponding to the reading start state is smaller than the time variable value corresponding to the reading end state, determining that the Nth frame image has been read completely.
Optionally, the buffer queue includes a first space for storing left-eye images and a second space for storing right-eye images, and the Nth frame image is an Nth frame left-eye image or an Nth frame right-eye image;
the writing the (N+1)th frame image into the buffer queue includes:
if the Nth frame image is the Nth frame left-eye image, writing the (N+1)th frame left-eye image into the first space of the buffer queue;
if the Nth frame image is the Nth frame right-eye image, writing the (N+1)th frame right-eye image into the second space of the buffer queue.
Optionally, before the writing the (N+1)th frame image into the buffer queue, the method further includes:
obtaining attitude data of a user;
determining, according to the attitude data and the positions of objects in a virtual reality scene, the objects located within the user's field of view;
generating the (N+1)th frame left-eye image and the (N+1)th frame right-eye image corresponding to the objects located within the user's field of view.
An embodiment of the present invention provides an image write-in control device, including:
an update module, configured to, in response to a first reading state produced by a read operation triggered by a display module on an Nth frame image in a buffer queue, update the time variable value corresponding to the first reading state to the trigger time corresponding to the first reading state;
a state determining module, configured to determine, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to a current second reading state, whether the Nth frame image has been read completely;
a writing module, configured to write an (N+1)th frame image into the buffer queue if the Nth frame image has been read completely.
An embodiment of the present invention provides an electronic device, including a memory and a processor connected to the memory;
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are called and executed by the processor;
the processor is configured to execute the one or more computer instructions to perform any one of the above image write-in control methods.
With the image write-in control method, device, and electronic device provided by the embodiments of the present invention, the display module of a head-mounted virtual reality device triggers a read operation on the Nth frame image in a buffer queue and simultaneously produces a first reading state. In response to the first reading state produced by this read operation, the head-mounted virtual reality device obtains the trigger time of the read operation, which is the trigger time corresponding to the first reading state, and uses it to update the time variable value corresponding to the first reading state. The device then determines whether it has finished reading the Nth frame image according to the chronological order of the updated time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state. In other words, the write operation on an image is controlled by the progress of image reading. If the read operation has completed, the device begins writing the (N+1)th frame image into the buffer queue; if the read operation has not completed, the device waits and does not perform the write operation. This ensures that the reading and writing of each image are completed separately, so that read/write conflicts do not occur, picture tearing is avoided, and display quality is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of Embodiment 1 of the image write-in control method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of Embodiment 2 of the image write-in control method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of Embodiment 1 of the image write-in control device provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of Embodiment 2 of the image write-in control device provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of Embodiment 1 of the electronic device provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the internal configuration structure of a head-mounted virtual reality device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms used in the embodiments of the present invention are merely for the purpose of describing particular embodiments and are not intended to limit the present invention. The singular forms "a", "said", and "the" used in the embodiments of the present invention and the appended claims are also intended to include plural forms, unless the context clearly indicates otherwise; "a plurality of" generally includes at least two, without excluding the case of at least one.
It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should be understood that although the terms first, second, third, and so on may be used in the embodiments of the present invention to describe XXX, the XXX should not be limited by these terms. These terms are only used to distinguish the XXX from one another. For example, without departing from the scope of the embodiments of the present invention, a first XXX may also be referred to as a second XXX, and similarly, a second XXX may also be referred to as a first XXX.
Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
It should also be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a commodity or system including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to the commodity or system. Without further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the commodity or system that includes the element.
Under normal circumstances, a head-mounted virtual reality device first writes an image to be displayed into a buffer queue; the display module in the head-mounted display device then reads this image from the buffer queue and renders it, at which point the user can watch the image in the virtual scene through the display screen of the head-mounted virtual reality device. To enable the user to watch the images in the virtual scene normally, the following method is provided. Fig. 1 is a flowchart of Embodiment 1 of the image write-in control method provided by an embodiment of the present invention; the method may be executed by a head-mounted virtual reality device. As shown in Fig. 1, the method includes the following steps:
S101: in response to a first reading state produced by a read operation triggered by the display module on the Nth frame image in the buffer queue, updating the time variable value corresponding to the first reading state to the trigger time corresponding to the first reading state.
After the head-mounted virtual reality device is turned on, the display module in the device periodically sends an image read signal. The display module controls the start and end of image reading according to this image read signal; optionally, in practice, a commonly used image read signal is the vertical synchronization signal.
In response to the image read control signal, the display module triggers a read operation on the Nth frame image in the buffer queue and, while triggering this read operation, also produces a first reading state. Optionally, the triggered read operation may be an operation indicating that image reading starts or an operation indicating that image reading ends; correspondingly, the first reading state may be a reading start state, indicating that reading of the Nth frame image starts, or a reading end state, indicating that reading of the Nth frame image has completed.
In response to the first reading state, the head-mounted virtual reality device obtains the trigger time of the read operation on the Nth frame image; this trigger time is the trigger time corresponding to the first reading state produced by the read operation. The head-mounted virtual reality device then updates the time variable value corresponding to the first reading state according to the obtained trigger time. As described above, because the first reading state may be either a reading start state or a reading end state, the updated time variable value corresponding to the first reading state may represent either the reading start time or the reading end time of the Nth frame image.
S102: determining, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state, whether the Nth frame image has been read completely.
S103: if the Nth frame image has been read completely, writing the (N+1)th frame image into the buffer queue.
The head-mounted virtual reality device determines whether the Nth frame image has been read completely according to the updated time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state; here, the time variable value corresponding to the current second reading state is the time variable value that already existed before the time variable value corresponding to the first reading state was updated. It should be noted that, in practice, the first reading state and the second reading state are two different reading states. Optionally, if the first reading state is a reading start state, the second reading state is a reading end state; if the first reading state is a reading end state, the second reading state is a reading start state.
When the time variable value corresponding to the first reading state represents the time at which reading of the Nth frame image starts and the time variable value corresponding to the second reading state represents the time at which reading of the (N-1)th frame image ended, the time variable value corresponding to the first reading state is greater than the time variable value corresponding to the second reading state. This indicates that the Nth frame image is still being read, so the head-mounted virtual reality device remains in its current waiting state and does not write the (N+1)th frame image into the buffer queue.
When the time variable value corresponding to the first reading state represents the time at which reading of the Nth frame image started and the time variable value corresponding to the second reading state represents the time at which reading of the Nth frame image ended, the time variable value corresponding to the first reading state is smaller than the time variable value corresponding to the second reading state. This indicates that the Nth frame image has been read completely, so the head-mounted virtual reality device can proceed to write the (N+1)th frame image into the buffer queue.
In this embodiment, the display module of the head-mounted virtual reality device triggers a read operation on the Nth frame image in the buffer queue and simultaneously produces a first reading state. In response to the first reading state produced by this read operation, the head-mounted virtual reality device obtains the trigger time of the read operation, i.e. the trigger time corresponding to the first reading state, and uses it to update the time variable value corresponding to the first reading state. According to the chronological order of the updated time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state, the device determines whether it has finished reading the Nth frame image; in other words, the write operation on an image is controlled by the progress of image reading. If the read operation has completed, the device begins writing the (N+1)th frame image into the buffer queue; if the read operation has not completed, the device waits and does not perform a write operation. This ensures that the reading and writing of each image are completed separately, so that read/write conflicts do not occur, picture tearing is avoided, and display quality is improved.
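For illustration only, the comparison described above can be sketched as follows in C++. This is a minimal example, not part of the claimed method; the names readStartTime, readEndTime, onReadStateTriggered, and frameReadFinished, as well as the use of std::atomic and a steady clock, are assumptions made for this sketch.

    #include <atomic>
    #include <chrono>
    #include <cstdint>

    // Time variable values (in nanoseconds) for the two reading states,
    // updated whenever the display module triggers a read operation.
    std::atomic<int64_t> readStartTime{0};  // "reading start" state
    std::atomic<int64_t> readEndTime{0};    // "reading end" state

    static int64_t nowNs() {
        return std::chrono::duration_cast<std::chrono::nanoseconds>(
                   std::chrono::steady_clock::now().time_since_epoch()).count();
    }

    // Called when the display module produces a reading state for frame N.
    // 'isStart' distinguishes the reading start state from the reading end state.
    void onReadStateTriggered(bool isStart) {
        if (isStart) {
            readStartTime.store(nowNs());   // update the time variable of the triggered state
        } else {
            readEndTime.store(nowNs());
        }
    }

    // Checked by the writer before writing frame N+1 into the buffer queue:
    // frame N counts as fully read only if the reading start time is earlier
    // than the reading end time, i.e. the last event was "reading end".
    bool frameReadFinished() {
        return readStartTime.load() < readEndTime.load();
    }

Under this scheme the writer simply waits until frameReadFinished() returns true before writing the (N+1)th frame, which matches the wait/write behavior described in steps S102 and S103.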
Fig. 2 is a flowchart of Embodiment 2 of the image write-in control method provided by an embodiment of the present invention. As shown in Fig. 2, the method includes the following steps:
S201: in response to a first reading state produced by a read operation triggered by the display module on the Nth frame image in the buffer queue, updating the time variable value corresponding to the first reading state to the trigger time corresponding to the first reading state.
The execution of step S201 is similar to the corresponding step of the previous embodiment; refer to the related description of the embodiment shown in Fig. 1, which is not repeated here.
S202: determining, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state, whether the Nth frame image has been read completely.
According to the related description in Embodiment 1, the first reading state and the second reading state are two different reading states. It is easy to see that two cases may occur: when the first reading state is a reading start state, the second reading state is a reading end state; when the first reading state is a reading end state, the second reading state is a reading start state.
In the first case, where the first reading state is a reading start state and the second reading state is a reading end state, the time variable value corresponding to the first reading state is the time variable value corresponding to the reading start state, and the time variable value corresponding to the second reading state is the time variable value corresponding to the reading end state. From the related description in Embodiment 1, it can be obtained that if the time variable value corresponding to the reading start state is smaller than the time variable value corresponding to the reading end state, it is determined that the Nth frame image has been read completely.
In the second case, where the first reading state is a reading end state and the second reading state is a reading start state, the time variable value corresponding to the first reading state is the time variable value corresponding to the reading end state, and the time variable value corresponding to the second reading state is the time variable value corresponding to the reading start state. Similarly, from the related description in Embodiment 1, it can be obtained that if the time variable value corresponding to the reading start state is greater than the time variable value corresponding to the reading end state, the Nth frame image is still being read.
S203: if the Nth frame image has been read completely, generating the (N+1)th frame image.
Unlike watching an ordinary image, the scene a user watches in a virtual scene is built in advance, covers 360 degrees, and is closely related to the user's posture. Therefore, after the Nth frame image has been read completely, the (N+1)th frame image to be written needs to be generated in real time according to the user's current posture.
Optionally, after the Nth frame image has been read completely, the head-mounted virtual reality device may generate the (N+1)th frame image in the following manner:
First, the attitude data of the user is obtained.
Then, the objects located within the user's field of view are determined according to the attitude data and the positions of the objects in the virtual reality scene.
Finally, the (N+1)th frame image corresponding to the objects located within the user's field of view is generated.
Specifically, a nine-axis sensor configured in the head-mounted virtual reality device may collect the user's attitude data at a preset collection interval. Optionally, this attitude data may be stored in a fixed storage space, from which the head-mounted virtual reality device obtains the user's current attitude data. Optionally, the collected attitude data may be attitude data of the user's head, such as the pitch angle, deflection angle, and heading angle of the user's head.
In a 360-degree virtual reality scene, the user's field of view is limited and can be measured by the viewing angle; the binocular viewing angle of an adult is generally about 110 degrees. Meanwhile, after the virtual reality scene has been built, the head-mounted virtual reality device knows the position coordinates of each object in the scene; optionally, the position coordinates of each object in the virtual scene may be written into a file. Therefore, the head-mounted virtual reality device can determine the coordinate range corresponding to the user's field of view according to the user's current attitude data, and filter out of the file the objects located within the user's field of view. Finally, the head-mounted virtual reality device generates the (N+1)th frame image according to the filtered objects.
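As a rough, non-limiting illustration of the field-of-view filtering just described, the following C++ sketch keeps the scene objects whose horizontal bearing lies within a viewing angle centered on the user's heading. The Pose and SceneObject structures, the 110-degree default, the assumption that the user is at the origin, and the purely horizontal test are simplifications introduced for this example; an actual device might use a full 3D view frustum.

    #include <cmath>
    #include <vector>

    struct Pose {           // head attitude from the nine-axis sensor
        float pitchDeg;
        float yawDeg;       // heading angle
        float rollDeg;      // deflection angle
    };

    struct SceneObject {
        int id;
        float x, y, z;      // position coordinates in the virtual scene
    };

    // Return the objects whose horizontal bearing from the user falls inside
    // the field of view centered on the current heading.
    std::vector<SceneObject> objectsInView(const std::vector<SceneObject>& scene,
                                           const Pose& pose,
                                           float fovDeg = 110.0f) {
        constexpr float kPi = 3.14159265f;
        std::vector<SceneObject> visible;
        for (const auto& obj : scene) {
            float bearingDeg = std::atan2(obj.x, obj.z) * 180.0f / kPi;
            float delta = std::fabs(bearingDeg - pose.yawDeg);
            if (delta > 180.0f) delta = 360.0f - delta;   // wrap around the circle
            if (delta <= fovDeg / 2.0f) {
                visible.push_back(obj);
            }
        }
        return visible;
    }

The (N+1)th frame image would then be rendered from the returned objects, as described in the step that follows.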
S204: writing the (N+1)th frame image into the buffer queue.
After the Nth frame image has been read completely, the head-mounted virtual reality device writes the generated (N+1)th frame image into the buffer queue.
In addition, the images displayed by the head-mounted virtual reality device have a stereoscopic effect, which is produced by combining a left-eye image and a right-eye image that have parallax. To display one frame of a stereoscopic image normally, the left-eye image and the right-eye image must be read and rendered separately; that is, the writing timing of the left-eye image and of the right-eye image must each be controlled precisely.
To control the writing timing of the left-eye image and the right-eye image precisely, the buffer queue may optionally be divided into two spaces: a first space for storing left-eye images and a second space for storing right-eye images. The Nth frame image and the (N+1)th frame image mentioned above and below may both be left-eye images or both be right-eye images.
If the Nth frame image that has been read completely is the Nth frame left-eye image, the (N+1)th frame left-eye image is written into the first space of the buffer queue. If the Nth frame image that has been read completely is the Nth frame right-eye image, the (N+1)th frame right-eye image is written into the second space of the buffer queue.
It should be noted that the present invention does not restrict the order of image reading; when reading the left-eye image and the right-eye image corresponding to one frame, either the left-eye image or the right-eye image may be read first.
In this embodiment, the image whose writing is controlled needs to be generated in real time according to the user's attitude data, which ensures that the content the user watches through the head-mounted virtual reality device matches the user's attitude. The left-eye image and the right-eye image generated in real time are then written into the buffer queue separately. The buffer queue has been divided into a first space and a second space, which are used to store left-eye images and right-eye images respectively. This avoids the image reading errors that easily occur when left-eye and right-eye images are cached together in a single undivided buffer queue, for example mistakenly reading a right-eye image when a left-eye image should be read, and allows the head-mounted virtual reality device to control the writing timing of each image independently and precisely.
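Combining the per-eye spaces with the read-state comparison from Embodiment 1 could look roughly like the following sketch; the EyeBuffer and BufferQueue structures and the Frame type are hypothetical names introduced only for this example.

    #include <atomic>
    #include <cstdint>
    #include <vector>

    using Frame = std::vector<uint8_t>;   // placeholder pixel buffer

    // One space of the buffer queue together with its read-state time variables.
    struct EyeBuffer {
        Frame frame;
        std::atomic<int64_t> readStartTime{0};
        std::atomic<int64_t> readEndTime{0};

        bool readFinished() const {
            return readStartTime.load() < readEndTime.load();
        }
    };

    struct BufferQueue {
        EyeBuffer leftSpace;    // first space: left-eye images
        EyeBuffer rightSpace;   // second space: right-eye images
    };

    // Write the (N+1)th frame for one eye only after the Nth frame
    // for that same eye has been read completely.
    bool writeNextFrame(EyeBuffer& space, const Frame& nextFrame) {
        if (!space.readFinished()) {
            return false;       // the Nth frame is still being read: wait
        }
        space.frame = nextFrame;
        return true;
    }

A writer would call writeNextFrame(queue.leftSpace, ...) and writeNextFrame(queue.rightSpace, ...) independently, which mirrors the separate control of the left-eye and right-eye writing timing described above.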
Fig. 3 is a schematic structural diagram of Embodiment 1 of the image write-in control device provided by an embodiment of the present invention. As shown in Fig. 3, the image write-in control device includes an update module 11, a state determining module 12, and a writing module 13.
The update module 11 is configured to, in response to a first reading state produced by a read operation triggered by the display module on the Nth frame image in the buffer queue, update the time variable value corresponding to the first reading state to the trigger time corresponding to the first reading state.
The state determining module 12 is configured to determine, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state, whether the Nth frame image has been read completely.
The writing module 13 is configured to write the (N+1)th frame image into the buffer queue if the Nth frame image has been read completely.
The device shown in Fig. 3 can execute the method of the embodiment shown in Fig. 1. For the parts not described in detail in this embodiment, and for the execution process and technical effects of this technical solution, refer to the related description of the embodiment shown in Fig. 1; details are not repeated here.
Fig. 4 is a schematic structural diagram of Embodiment 2 of the image write-in control device provided by an embodiment of the present invention. As shown in Fig. 4, on the basis of the embodiment shown in Fig. 3, the state determining module 12 in the image write-in control device is configured to: if the time variable value corresponding to the reading start state is smaller than the time variable value corresponding to the reading end state, determine that the Nth frame image has been read completely.
Optionally, the buffer queue includes a first space for storing left-eye images and a second space for storing right-eye images, and the Nth frame image is an Nth frame left-eye image or an Nth frame right-eye image;
the writing module 13 in the image write-in control device is specifically configured to:
if the Nth frame image is the Nth frame left-eye image, write the (N+1)th frame left-eye image into the first space of the buffer queue; and if the Nth frame image is the Nth frame right-eye image, write the (N+1)th frame right-eye image into the second space of the buffer queue.
Optionally, the image write-in control device further includes an acquisition module 21, an object determining module 22, and a generation module 23.
The acquisition module 21 is configured to obtain the attitude data of the user.
The object determining module 22 is configured to determine, according to the attitude data and the positions of objects in the virtual reality scene, the objects located within the user's field of view.
The generation module 23 is configured to generate the (N+1)th frame left-eye image and the (N+1)th frame right-eye image corresponding to the objects located within the user's field of view.
The device shown in Fig. 4 can execute the method of the embodiment shown in Fig. 2. For the parts not described in detail in this embodiment, and for the execution process and technical effects of this technical solution, refer to the related description of the embodiment shown in Fig. 2; details are not repeated here.
The internal functions and structure of the image write-in control device have been described above. In one possible design, the structure of the image write-in control device may be implemented as an electronic device, such as a head-mounted virtual reality device. Fig. 5 is a schematic structural diagram of Embodiment 1 of the electronic device provided by an embodiment of the present invention. As shown in Fig. 5, the electronic device includes a memory 31 and a processor 32 connected to the memory. The memory 31 is configured to store a program that supports the electronic device in executing the image write-in control method provided in any of the above embodiments, and the processor 32 is configured to execute the program stored in the memory 31.
The program includes one or more computer instructions, and when the one or more computer instructions are executed by the processor 32, the following steps can be implemented:
in response to a first reading state produced by a read operation triggered by the display module on the Nth frame image in the buffer queue, updating the time variable value corresponding to the first reading state to the trigger time corresponding to the first reading state;
determining, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state, whether the Nth frame image has been read completely;
if the Nth frame image has been read completely, writing the (N+1)th frame image into the buffer queue.
Optionally, the processor 32 is further configured to execute all or part of the steps of the foregoing method.
The structure of the electronic device may further include a communication interface 33, used for the electronic device to communicate with other devices or a communication network.
Fig. 6 is a schematic diagram of the internal configuration structure of a head-mounted virtual reality device provided by an embodiment of the present invention.
The display unit 401 may include a display panel arranged on the side surface of the head-mounted virtual display device 400 facing the user's face; it may be one integral panel, or a left panel and a right panel corresponding to the user's left eye and right eye respectively. The display panel may be an electroluminescent (EL) element, a liquid crystal display or a microdisplay with a similar structure, a retina display that can display directly, or a similar laser-scanning display.
The virtual image optical unit 402 presents the image shown by the display unit 401 in a magnified manner and allows the user to observe the displayed image as a magnified virtual image. The display image output on the display unit 401 may be an image of a virtual scene provided by a content reproduction device (a Blu-ray disc or DVD player) or a streaming media server, or an image of a real scene shot by the external camera 410. In some embodiments, the virtual image optical unit 402 may include a lens unit, such as a spherical lens, an aspherical lens, or a Fresnel lens.
The input operation unit 403 includes at least one functional component used to perform input operations, such as a key, a button, a switch, or another component with a similar function; it receives user instructions through the functional component and outputs the instructions to the control unit 407.
The state information acquisition unit 404 is used to obtain the state information of the user wearing the head-mounted virtual display device 400. The state information acquisition unit 404 may include various types of sensors for detecting state information by itself, and may also obtain state information through the communication unit 405 from an external device, such as a smartphone, a watch, or another multifunctional terminal worn by the user. The state information acquisition unit 404 may obtain position information and/or attitude information of the user's head, and may include one or more of a gyroscope sensor, an acceleration sensor, a Global Positioning System (GPS) sensor, a geomagnetic sensor, a Doppler effect sensor, an infrared sensor, and a radio-frequency field intensity sensor. In addition, the state information acquisition unit 404 obtains state information of the user wearing the head-mounted virtual display device 400, for example the user's operating state (such as whether the user is wearing the head-mounted virtual display device 400), the user's action state (such as being still, walking, running and similar movement states, the posture of the hand or fingertip, the open or closed state of the eyes, the gaze direction, the pupil size), the state of mind (such as whether the user is immersed in observing the displayed image), and even the physiological state.
The communication unit 405 performs communication processing with external devices, as well as coding, modulation, demodulation, and decoding of communication signals. In addition, the control unit 407 may send transmission data to external devices through the communication unit 405. The communication mode may be wired or wireless, for example Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Wireless Fidelity (Wi-Fi), Bluetooth communication or Bluetooth Low Energy communication, the mesh network of the IEEE 802.11s standard, and the like. In addition, the communication unit 405 may be a cellular radio transceiver operating according to Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), or similar standards.
In some embodiments, the head-mounted virtual display device 400 may further include a storage unit 406, which is configured as a mass storage device such as a solid-state drive (SSD). In some embodiments, the storage unit 406 may store application programs or various types of data; for example, the content that the user watches using the head-mounted virtual display device 400 may be stored in the storage unit 406.
In some embodiments, the head-mounted virtual display device 400 may further include a control unit 407, which may include a central processing unit (CPU) or another device with a similar function. In some embodiments, the control unit 407 may be used to execute the application programs stored in the storage unit 406, or the control unit 407 may also be used to execute the methods, functions, and circuits of the operations disclosed in some embodiments of the present application.
The graphics processing unit 408 is used to perform signal processing, for example image quality correction related to the image signal output from the control unit 407, and to convert its resolution to the resolution of the screen of the display unit 401. Then, the display driving unit selects each row of pixels of the display unit 401 in turn and scans each row of pixels of the display unit 401 line by line, thereby providing pixel signals based on the signal-processed image signal.
In some embodiments, the head-mounted virtual display device 400 may further include an external camera. The external camera 410 may be arranged on the front surface of the main body of the head-mounted virtual display device 400, and there may be one or more external cameras 410. The external camera 410 can obtain three-dimensional information and can also be used as a distance sensor. In addition, a position sensitive detector (PSD) for detecting a signal reflected from an object, or another type of distance sensor, may be used together with the external camera 410. The external camera 410 and the distance sensor can be used to detect the body position, posture, and shape of the user wearing the head-mounted virtual display device 400. In addition, under certain conditions, the user can directly watch or preview a real scene through the external camera 410.
In some embodiments, the head-mounted virtual display device 400 may further include a sound processing unit 411, which can perform sound quality correction or sound amplification of the audio signal output from the control unit 407, signal processing of an input audio signal, and the like. Then, the sound input/output unit 412 outputs sound to the outside after the sound processing and inputs sound from the microphone.
It should be noted that the structures or components shown in bold frames in Fig. 6 may be located outside and independent of the head-mounted virtual display device 400, for example arranged in an external processing system, such as a computer system, used in cooperation with the head-mounted virtual display device 400; alternatively, the structures or components shown in dashed frames may be arranged inside or on the surface of the head-mounted virtual display device 400.
The device embodiments described above are merely schematic. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement this without creative effort.
Through the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by adding the necessary general hardware platform, and can certainly also be implemented by a combination of hardware and software. Based on this understanding, the part of the above technical solutions that in essence contributes to the prior art may be embodied in the form of a product. The computer product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

  1. An image write-in control method, characterized by comprising:
    in response to a first reading state produced by a read operation triggered by a display module on an Nth frame image in a buffer queue, updating the time variable value corresponding to the first reading state to the trigger time corresponding to the first reading state;
    determining, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to a current second reading state, whether the Nth frame image has been read completely;
    if the Nth frame image has been read completely, writing an (N+1)th frame image into the buffer queue.
  2. The method according to claim 1, characterized in that if the first reading state is a reading start state, the second reading state is a reading end state; and if the first reading state is a reading end state, the second reading state is a reading start state.
  3. The method according to claim 2, characterized in that the determining, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to the current second reading state, whether the Nth frame image has been read completely comprises:
    if the time variable value corresponding to the reading start state is smaller than the time variable value corresponding to the reading end state, determining that the Nth frame image has been read completely.
  4. The method according to any one of claims 1 to 3, characterized in that the buffer queue comprises a first space for storing left-eye images and a second space for storing right-eye images, and the Nth frame image is an Nth frame left-eye image or an Nth frame right-eye image;
    the writing the (N+1)th frame image into the buffer queue comprises:
    if the Nth frame image is the Nth frame left-eye image, writing the (N+1)th frame left-eye image into the first space of the buffer queue;
    if the Nth frame image is the Nth frame right-eye image, writing the (N+1)th frame right-eye image into the second space of the buffer queue.
  5. The method according to claim 4, characterized in that before the writing the (N+1)th frame image into the buffer queue, the method further comprises:
    obtaining attitude data of a user;
    determining, according to the attitude data and the positions of objects in a virtual reality scene, the objects located within the user's field of view;
    generating the (N+1)th frame left-eye image and the (N+1)th frame right-eye image corresponding to the objects located within the user's field of view.
  6. An image write-in control device, characterized by comprising:
    an update module, configured to, in response to a first reading state produced by a read operation triggered by a display module on an Nth frame image in a buffer queue, update the time variable value corresponding to the first reading state to the trigger time corresponding to the first reading state;
    a state determining module, configured to determine, according to the chronological order of the time variable value corresponding to the first reading state and the time variable value corresponding to a current second reading state, whether the Nth frame image has been read completely;
    a writing module, configured to write an (N+1)th frame image into the buffer queue if the Nth frame image has been read completely.
  7. An electronic device, characterized by comprising a memory and a processor connected to the memory;
    the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are called and executed by the processor;
    the processor is configured to execute the one or more computer instructions to implement the image write-in control method according to any one of claims 1 to 5.
  8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a computer, causes the computer to implement the image write-in control method according to any one of claims 1 to 5.
CN201711047759.5A 2017-10-31 2017-10-31 Image write-in control method, device and electronic equipment Pending CN107743223A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711047759.5A CN107743223A (en) 2017-10-31 2017-10-31 Image write-in control method, device and electronic equipment
PCT/CN2017/113567 WO2019085109A1 (en) 2017-10-31 2017-11-29 Image writing control method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711047759.5A CN107743223A (en) 2017-10-31 2017-10-31 Image write-in control method, device and electronic equipment

Publications (1)

Publication Number Publication Date
CN107743223A (en) 2018-02-27

Family

ID=61233469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711047759.5A Pending CN107743223A (en) 2017-10-31 2017-10-31 Image write-in control method, device and electronic equipment

Country Status (2)

Country Link
CN (1) CN107743223A (en)
WO (1) WO2019085109A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114153414A (en) * 2021-11-27 2022-03-08 深圳曦华科技有限公司 Image anti-tearing method and related device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160078846A1 (en) * 2014-09-17 2016-03-17 Mediatek Inc. Processor for use in dynamic refresh rate switching and related electronic device and method
CN106296566A (en) * 2016-08-12 2017-01-04 南京睿悦信息技术有限公司 A kind of virtual reality mobile terminal dynamic time frame compensates rendering system and method
CN106454312A (en) * 2016-09-29 2017-02-22 乐视控股(北京)有限公司 Image processing method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9971153B2 (en) * 2014-03-29 2018-05-15 Frimory Technologies Ltd. Method and apparatus for displaying video data
CN106095366B (en) * 2016-06-07 2019-01-15 北京小鸟看看科技有限公司 A kind of method, apparatus and virtual reality device shortening picture delay
CN106331823B (en) * 2016-08-31 2019-08-20 北京奇艺世纪科技有限公司 A kind of video broadcasting method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160078846A1 (en) * 2014-09-17 2016-03-17 Mediatek Inc. Processor for use in dynamic refresh rate switching and related electronic device and method
CN106296566A (en) * 2016-08-12 2017-01-04 南京睿悦信息技术有限公司 A kind of virtual reality mobile terminal dynamic time frame compensates rendering system and method
CN106454312A (en) * 2016-09-29 2017-02-22 乐视控股(北京)有限公司 Image processing method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114153414A (en) * 2021-11-27 2022-03-08 深圳曦华科技有限公司 Image anti-tearing method and related device

Also Published As

Publication number Publication date
WO2019085109A1 (en) 2019-05-09

Similar Documents

Publication Publication Date Title
US11366516B2 (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
US10534428B2 (en) Image processing device and image processing method, display device and display method, and image display system
CN109002164A (en) It wears the display methods for showing equipment, device and wears display equipment
CN107610044A (en) Image processing method, computer-readable recording medium and virtual reality helmet
CN107835404A (en) Method for displaying image, equipment and system based on wear-type virtual reality device
CN107688240A (en) Wear the control method, equipment and system of display device
US11113379B2 (en) Unlocking method and virtual reality device
CN108021346A (en) VR helmets show method, VR helmets and the system of image
CN107743223A (en) Image write-in control method, device and electronic equipment
CN107704397A (en) Applied program testing method, device and electronic equipment
CN107589841A (en) Wear the operating method of display device, wear display device and system
CN114255204A (en) Amblyopia training method, device, equipment and storage medium
CN107945100A (en) Methods of exhibiting, virtual reality device and the system of virtual reality scenario
CN109327519A (en) Flow data synchronous method, device, equipment and storage medium
CN109408011A (en) Wear display methods, device and the equipment of display equipment
KR102312601B1 (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
CN107844177A (en) Device parameter method of adjustment, device and electronic equipment
CN107958478B (en) Rendering method of object in virtual reality scene and virtual reality head-mounted equipment
CN107479842A (en) Character string display method and display device is worn in virtual reality scenario
CN108037902A (en) Wear the display methods, equipment and system of display device
KR101078928B1 (en) Apparatus for displaying three-dimensional image and method for displaying image in the apparatus
US20240028112A1 (en) Image display device and image display method
CN107833265A (en) A kind of image switching methods of exhibiting and virtual reality device
CN107621881A (en) Virtual content control method and control device
KR102286517B1 (en) Control method of rotating drive dependiong on controller input and head-mounted display using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180227