WO2019033510A1 - Method for identifying a VR application program, and electronic device - Google Patents

Method for identifying a VR application program, and electronic device

Info

Publication number
WO2019033510A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
interface
black image
black
gray value
Prior art date
Application number
PCT/CN2017/103193
Other languages
English (en)
Chinese (zh)
Inventor
孟亚州
Original Assignee
歌尔科技有限公司 (Goertek Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 歌尔科技有限公司 (Goertek Technology Co., Ltd.)
Publication of WO2019033510A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/0002 — Inspection of images, e.g. flaw detection

Definitions

  • the present invention relates to the field of virtual reality technologies, and in particular, to a method and an electronic device for identifying a VR application.
  • VR is an abbreviation of Virtual Reality.
  • the invention provides a method for identifying a VR application and an electronic device for efficiently identifying a VR application from a large number of applications.
  • The invention provides a method for identifying a VR application, comprising: acquiring at least four reference image blocks of a specified size from the four corners of an interface image of an application to be identified; determining, according to the gray values of the pixels included in the at least four reference image blocks, whether the at least four reference image blocks are black image blocks; if the at least four reference image blocks are black image blocks, identifying a black image region in the interface image according to the gray values of the pixels included in the interface image; determining, according to the gray values of the pixels included in the black image region, whether the left and right portions of the black image region correspond; and, if the left and right portions of the black image region correspond, determining that the application to be identified is a VR application.
  • Optionally, determining whether the at least four reference image blocks are all black image blocks according to the gray values of the pixels they include comprises: acquiring, in the at least four reference image blocks, the number of pixels whose gray value is smaller than a specified gray threshold; calculating the ratio of the number of pixels whose gray value is smaller than the specified gray threshold to the total number of pixels included in the at least four reference image blocks; and, if the ratio is greater than or equal to a set ratio threshold, determining that the at least four reference image blocks are black image blocks.
  • the method further includes: if the at least four reference image blocks are not all black image blocks, determining that the to-be-identified application is a non-VR application.
  • Optionally, identifying the black image area in the interface image according to the gray values of the pixels included in the interface image comprises: acquiring, in the interface image, the area in which the pixels whose gray values are smaller than the specified gray threshold are located, as a suspect black image area; acquiring a geometric envelope of the suspect black image area according to the coordinates of the pixels included in the suspect black image area; and, if the geometric envelope has an apex angle coincident with an apex angle of the interface image, or the longitudinal central axis of the geometric envelope coincides with the longitudinal central axis of the interface image, determining that the suspect black image area belongs to the black image area of the interface image.
  • Optionally, judging, according to the gray values of the pixels included in the black image area, whether the left and right portions of the black image area correspond comprises: determining whether the black image area is symmetrical along the longitudinal central axis of the interface image according to the gray values of the pixels included in the black image area.
  • Optionally, determining whether the black image area is symmetrical along the longitudinal central axis of the interface image according to the gray values of the pixels it includes comprises: dividing the black image area into two sub-image areas along the longitudinal central axis of the interface image; calculating the similarity rate of the two sub-image areas according to the total number of pixels included in the two sub-image areas and the number of symmetric pixel pairs having the same gray value; and, if the similarity rate is greater than a set similarity threshold, determining that the black image area is symmetrical along the longitudinal central axis of the interface image.
  • Optionally, calculating the similarity rate of the two sub-image areas according to the total number of pixels included in the two sub-image areas and the number of symmetric pixel pairs having the same gray value comprises: establishing a coordinate system with the horizontal central axis of the interface image as the abscissa axis and the longitudinal central axis of the interface image as the ordinate axis; in the two sub-image areas, obtaining pixels having the same ordinate and opposite abscissas as symmetric pixels; counting the number of symmetric pixel pairs having the same gray value, and obtaining the average of the total numbers of pixels included in the two sub-image areas; and determining the similarity rate of the two sub-image areas from the ratio of the number of symmetric pixel pairs having the same gray value to that average.
  • the present invention also provides a VR application identification electronic device, including: a memory and a processor;
  • The memory is configured to store one or more computer instructions; the processor is configured to execute the one or more computer instructions to: acquire at least four reference image blocks of a specified size from the four corners of an interface image of an application to be identified; determine, according to the gray values of the pixels included in the at least four reference image blocks, whether the at least four reference image blocks are black image blocks; if the at least four reference image blocks are black image blocks, identify a black image region in the interface image according to the gray values of the pixels included in the interface image; determine, according to the gray values of the pixels included in the black image region, whether the left and right portions of the black image region correspond; and, if they correspond, determine that the application to be identified is a VR application.
  • Optionally, the processor is specifically configured to: acquire, in the at least four reference image blocks, the number of pixels whose gray value is smaller than a specified gray threshold; calculate the ratio of the number of pixels whose gray value is smaller than the specified gray threshold to the total number of pixels included in the at least four reference image blocks; and, if the ratio is greater than or equal to the set ratio threshold, determine that the at least four reference image blocks are black image blocks.
  • Optionally, the processor is specifically configured to: obtain, in the interface image, the area in which the pixels whose gray values are smaller than the specified gray threshold are located, as a suspect black image area; acquire a geometric envelope of the suspect black image area according to the coordinates of the pixels included in the suspect black image area; and, if the geometric envelope has an apex angle coincident with an apex angle of the interface image, or the longitudinal central axis of the geometric envelope coincides with the longitudinal central axis of the interface image, determine that the suspect black image area belongs to the black image area of the interface image.
  • The VR application identification method and electronic device provided by the present invention initially screen applications by judging whether the image blocks at the four corners of the interface image of an application to be identified are black image blocks, and then determine whether the left and right portions of the black image area of the interface image correspond. When the left and right portions correspond, the application to be identified is determined to be a VR application. This method is not limited by the package name or name of the application, and its recognition efficiency for VR applications is high.
  • FIG. 1a is a schematic flowchart of a method for identifying a VR application according to an embodiment of the present invention
  • FIG. 1b is a schematic diagram of an interface image of a VR application;
  • FIG. 2a is a schematic flowchart of another method for identifying a VR application according to an embodiment of the present invention;
  • FIG. 2b is a schematic diagram of selecting reference image blocks at the four corners of an interface image according to an embodiment of the present invention;
  • FIG. 2c is a schematic diagram of acquiring a geometric envelope of a suspect black image region in an interface image according to an embodiment of the present invention;
  • FIG. 2d is another schematic diagram of acquiring a geometric envelope of a suspect black image region in an interface image according to an embodiment of the present invention;
  • FIG. 2e is a schematic diagram of establishing a coordinate system on an interface image according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram showing the internal configuration of the head mounted display device 400 in some embodiments provided by the present invention.
  • FIG. 1a is a schematic flowchart of a method for identifying a VR application according to an embodiment of the present invention. Referring to FIG. 1a, the method includes:
  • Step 101 Acquire at least four reference image blocks of a specified size from four corners of the interface image of the application to be identified.
  • Step 102 Determine, according to the gray value of the pixel points included in the at least four reference image blocks, whether the at least four reference image blocks are black image blocks; if yes, perform step 103; if not, execute Step 106.
  • Step 103 Identify a black image area in the interface image according to a gray value of a pixel point included in the interface image.
  • Step 104 Determine whether the left and right portions of the black image region correspond according to the gray value of the pixel included in the black image region; if yes, execute step 105; if no, execute step 106.
  • Step 105 Determine that the to-be-identified application is a VR application.
  • Step 106 Determine that the to-be-identified application is a non-VR application.
  • the application to be identified may be any application installed on a VR device or a terminal device such as a mobile phone.
  • The interface image is an image corresponding to any interface of the application to be identified, and includes the interface style and interface elements of the application to be identified.
  • The application store or application market can be accessed to find the application to be identified; the application is then opened, and an interface image of any of its interfaces is obtained.
  • Alternatively, the application to be identified can be found on the desktop or in the application list; the application is then opened, and an interface image of any of its interfaces is obtained.
  • the shape of the interface image is a rectangle, and the four corners of the interface image are the areas near the four vertices of the interface image.
  • at least one reference image block may be acquired at each corner of the interface image.
  • the size of the reference image block is specified, and the specified size is associated with the size of the interface image.
  • the display interface of the VR application includes a content image area and a black image area.
  • the content image area is used to display interface information of the VR content or the VR application
  • the black image area is a non-content image area.
  • The display interface of a VR application includes two content image areas arranged side by side along the direction of the line connecting the user's left and right eyes. It can be considered that, in the rectangular interface image of a VR application, the part remaining after the two content image areas are removed is the black image area.
  • FIG. 1b is a schematic diagram of an interface image of a VR application. In FIG. 1b, the mesh portion is the content image area, and the portion outside the mesh is the black image area.
  • Both content image areas have a certain degree of barrel distortion, and this barrel distortion causes the four corners of the interface image of a VR application to be black. In the interface image of a non-VR application, the four corners of the image are not necessarily black. Therefore, this embodiment can determine whether the reference image blocks at the four corners of the interface image are black, so as to initially screen out non-VR applications.
  • A black image contains pixels whose gray values are small, usually between 0 and 10. Therefore, optionally, after the at least four reference image blocks are acquired, whether each reference image block is a black image block is determined according to the gray values of the pixels it includes, so as to implement a preliminary screening of non-VR applications.
  • The interface image of a non-VR application may also have all four corners black. Therefore, if the at least four reference image blocks acquired in step 101 are not all black image blocks, the interface image may be considered the interface image of a non-VR application. If the at least four reference image blocks acquired in step 101 are all black image blocks, the interface image may be the interface image of a VR application, but it is necessary to further confirm that the interface image is indeed the interface image of a VR application.
  • the interface image is determined to be an interface image of the VR application according to the corresponding features of the left and right portions of the black image area in the interface image.
  • the black image area in the interface image may be identified according to the gray value of the pixel points included in the interface image.
  • The interface displayed by a VR application is a dual-screen interface, in which the left and right screens correspond to the positions of the user's left and right eyes, respectively. At the same moment, the display contents seen by the user's left and right eyes differ slightly, producing a strong sense of stereoscopic depth. It can therefore be considered that, if an interface image is the interface image of a VR application, the left and right portions of the black image area in the interface image must correspond.
  • This step can determine whether the left and right portions of the interface image correspond by using the gray values of the pixels included in the black image area of the interface image. If they correspond, the application to be identified can be determined to be a VR application; otherwise, it is a non-VR application.
  • In this embodiment, by determining whether the four corners of the interface image of the application to be identified are all black image blocks, non-VR applications are initially screened out, and it is then further determined whether the left and right portions of the interface image correspond. When the left and right portions of the interface image correspond, the application to be identified is determined to be a VR application. This method is not limited by the package name or name of the application, and its recognition efficiency for VR applications is high.
  • The interface image may be a screen-capture image obtained by taking a screenshot of any interface of the application to be identified, or a photograph taken of any interface of the application to be identified.
  • The technical solution of the embodiment of the present invention is further described below, taking the screen-capture image of the application to be identified as an example.
  • FIG. 2a is a schematic flowchart of another method for identifying a VR application according to an embodiment of the present invention. Referring to FIG. 2a, the method includes:
  • Step 201 Obtain at least four reference image blocks of a specified size from four corners of the screen capture image of the application to be identified;
  • Step 202 Acquire, in the at least four reference image blocks, a number of pixels whose gray value is smaller than a specified gray threshold;
  • Step 203 Calculate a ratio of a number of pixel points whose gray value is smaller than a specified gray level threshold to a total number of pixel points included in the at least four reference image blocks.
  • Step 204 Determine whether the ratio is less than a set proportional threshold. If yes, go to step 210; if no, go to step 205.
  • Step 205 Identify a black image area in the screen capture image according to a gray value of a pixel point included in the screen capture image.
  • Step 206 Divide the black image area into two left and right sub-image areas along the longitudinal central axis of the screen-capture image.
  • Step 207 Calculate a similarity ratio of the two sub-image regions according to a total number of pixels included in the two sub-image regions and a number of symmetric pixel points with the same gray value.
  • Step 208 Determine, according to the similarity ratio of the two sub-image regions, whether the black image region is symmetric along a longitudinal central axis of the screen capture image; if yes, perform step 209; if no, perform step 210.
  • Step 209 Determine that the to-be-identified application is a VR application.
  • Step 210 Determine that the to-be-identified application is a non-VR application.
  • In step 201, when acquiring at least four reference image blocks of a specified size from the four corners of the screen-capture image of the application to be identified, the four vertices of the screen-capture image may be used as starting points for image selection, and image areas of the specified size may be selected on the screen-capture image to obtain the at least four reference image blocks.
  • only one square reference image block may be taken at each corner of the screen capture image.
  • As shown in FIG. 2b, four reference image blocks rect1, rect2, rect3, and rect4 are selected on the screen-capture image, and one vertex of each reference image block coincides with a vertex of the screen-capture image.
  • The advantage of using the vertices of the screen-capture image as the starting points for image selection is that, for screen-capture images in which the content image areas are distorted to different degrees, the possibility of including content-image pixels in the reference image blocks is reduced, which further improves the accuracy of black image block identification.
  • the content image area on a VR application interface is highly distorted, resulting in a smaller black image area at the four corners of the VR application interface.
  • Using the vertices of the screen-capture image as the starting points for image selection avoids selecting pixels from the content image area.
  • the specified size is associated with the size of the screenshot image.
  • With a suitable specified size, the selected reference image blocks are more effective for identification; the embodiment of the present invention does not, however, limit the specific value of the specified size.
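As an illustration of the corner sampling described in steps 101 and 201, a minimal sketch follows, assuming a grayscale image held in a NumPy array; the helper name `corner_blocks` is illustrative and not from the patent:

```python
import numpy as np

def corner_blocks(img, size):
    """Take one square reference block anchored at each of the four
    vertices of the interface image, as in FIG. 2b (rect1-rect4)."""
    h, w = img.shape[:2]
    return [
        img[:size, :size],          # top-left (rect1)
        img[:size, w - size:],      # top-right (rect2)
        img[h - size:, :size],      # bottom-left (rect3)
        img[h - size:, w - size:],  # bottom-right (rect4)
    ]
```

Anchoring each block at an image vertex keeps the sampled pixels as far as possible from the distorted content image areas.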
  • In step 202, the number C of pixels whose gray value is smaller than the specified gray threshold may be acquired in the at least four reference image blocks.
  • the specified gray value threshold is small, usually a single digit.
  • the gray value of the pixel may be acquired first.
  • obtaining the gray value of the pixel included in the reference image block may adopt the following optional methods:
  • Gray = R * 0.3 + G * 0.59 + B * 0.11; or
  • Gray = (R * 30 + G * 59 + B * 11) / 100
  • R, G, and B are the values of any pixel on the reference image block on the three color components of red, green, and blue, respectively, and Gray is the calculated gray value of the pixel.
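Either weighting can be applied per pixel over a whole image or block. A sketch of the integer variant, assuming an H×W×3 RGB NumPy array (the helper name `to_gray` is illustrative):

```python
import numpy as np

def to_gray(rgb):
    """Integer form of the patent's weighting:
    Gray = (R*30 + G*59 + B*11) / 100."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    return (r * 30 + g * 59 + b * 11) // 100
```

The integer form avoids floating-point work and cannot overflow once the channels are widened to 32-bit.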
  • In step 203, the ratio P1 of the number of pixels whose gray value is smaller than the specified gray threshold to the total number of pixels included in the reference image blocks is calculated; the following formula can be used:
  • P1 = C / (N × a × b)
  • where C is the number of pixels, in the at least four reference image blocks, whose gray value is smaller than the specified gray threshold; a is the number of pixels in the length direction of a reference image block; b is the number of pixels in the height direction of a reference image block; and N is the total number of reference image blocks, N ≥ 4.
  • In step 204, in the present embodiment, the set ratio threshold is preferably 99%, which gives high recognition efficiency and accuracy for black image blocks. That is, if P1 is greater than 99%, it may be determined that the at least four reference image blocks acquired in step 201 are black image blocks.
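Steps 202 to 204 can be combined into one check. A minimal sketch assuming grayscale NumPy blocks; the helper name and default values (single-digit gray threshold, 99% ratio) mirror the embodiment but are tunable:

```python
import numpy as np

def all_blocks_black(blocks, gray_threshold=10, ratio_threshold=0.99):
    """Pool the pixels of all reference blocks, count those darker
    than the gray threshold, and compare the resulting ratio P1
    against the set ratio threshold (steps 202-204 in sketch form)."""
    total = sum(b.size for b in blocks)
    dark = sum(int((b < gray_threshold).sum()) for b in blocks)
    return dark / total >= ratio_threshold
```

Returning `False` here corresponds to the early exit to step 210 (non-VR application).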
  • the black image area in the screen shot image refers to a portion of the screen shot image other than the content image area.
  • First, suspect black image regions, in which the gray values are smaller than the specified gray threshold, are identified in the screen-capture image, and the black image area is then distinguished from the suspect black image regions.
  • The image content displayed in the content image area of the screen-capture image may have black portions, and these black portions contain pixels whose gray values are also smaller than the specified gray threshold.
  • Therefore, a suspect black image area may contain black portions of the content image area.
  • the black image area of the screen capture image may be identified from the suspect black image area by using the following method:
  • a regular geometric figure can be used to draw the maximum contour of the suspect black image area as a geometric envelope of the suspect black image area.
  • To do so, the maximum and minimum values of the abscissa and the maximum and minimum values of the ordinate of the pixels in the suspect black image area may be calculated first, and the geometric envelope of the suspect black image area is determined by these four extreme values.
  • After the geometric envelope of a suspect black image area is determined, it is determined whether the geometric envelope has an apex angle that coincides with an apex angle of the screen-capture image, or whether the longitudinal central axis of the geometric envelope coincides with the longitudinal central axis of the screen-capture image. If either condition holds, the suspect black image area is determined to belong to the black image area of the screen-capture image. The following further explains the method for identifying the black image area provided by this embodiment in conjunction with FIG. 2c and FIG. 2d.
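The envelope construction and the two coincidence tests above can be sketched as follows, assuming a boolean NumPy mask of one suspect region; the helper names are illustrative, not from the patent:

```python
import numpy as np

def envelope(mask):
    """Axis-aligned 'geometric envelope' of a suspect black region,
    built from the four coordinate extremes of its pixels."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def is_black_region(mask):
    """A suspect region counts as black image area if its envelope
    shares a vertex with the image, or if the envelope's vertical
    midline lies on the image's longitudinal central axis."""
    h, w = mask.shape
    x0, y0, x1, y1 = envelope(mask)
    img_corners = {(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)}
    env_corners = {(x0, y0), (x1, y0), (x0, y1), (x1, y1)}
    if img_corners & env_corners:
        return True
    return (x0 + x1) == (w - 1)  # envelope midline on the image's central axis
```

Regions failing both tests are treated as black portions of the content image area, as in FIG. 2c and FIG. 2d.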
  • FIG. 2c is a schematic diagram of a geometric envelope diagram for acquiring a suspicious black image in a screen capture image according to an embodiment of the present invention.
  • The five shaded areas in FIG. 2c are suspect black image areas, and geometric envelopes 1 to 5 are the geometric envelopes corresponding to these suspect black image areas, respectively.
  • The four apex angles of geometric envelope 1 coincide with the four apex angles of the screen-capture image, and the longitudinal central axis of geometric envelope 1 coincides with the longitudinal central axis of the screen-capture image. Therefore, the suspect black image area corresponding to geometric envelope 1 can be considered to be the black image area of the screen-capture image.
  • None of the apex angles of geometric envelopes 2 to 5 coincides with an apex angle of the screen-capture image, and none of their longitudinal central axes coincides with the longitudinal central axis of the screen-capture image. Therefore, the suspect black image areas corresponding to geometric envelopes 2 to 5 can be considered to belong to the content image area of the screen-capture image.
  • FIG. 2d is another schematic diagram of acquiring a geometric envelope diagram of a suspicious black image area in a screen capture image according to an embodiment of the present invention.
  • In FIG. 2d, the degree of distortion of the content image areas is large, and the boundaries of the content image areas are tangent to the boundary of the screen-capture image.
  • The geometric envelopes corresponding to A1 to A4 each have an apex angle coincident with an apex angle of the screen-capture image, so the suspect black image areas A1 to A4 may be considered to belong to the black image area of the screen-capture image.
  • The longitudinal central axes of the geometric envelopes of A5 and A6 coincide with the longitudinal central axis of the screen-capture image, so the suspect black image areas A5 and A6 may be considered to belong to the black image area of the screen-capture image.
  • None of the apex angles of A7 to A10 coincides with an apex angle of the screen-capture image, and none of their longitudinal central axes coincides with the longitudinal central axis of the screen-capture image. Therefore, the suspect black image areas A7 to A10 can be considered to belong to the content image area of the screen-capture image.
  • the black image area is divided into two left and right sub-image areas along the longitudinal center axis.
  • The application to be identified is installed on a VR device, or on a mobile phone embedded in a VR device, for the user to view with both eyes. Therefore, the longer side of the screen-capture image of the application to be identified is the side parallel to the direction of the line connecting the user's left and right eyes.
  • The longitudinal central axis is the central axis perpendicular to the longer side of the screen-capture image, and it divides the screen-capture image into two equal parts. That is, if the length and height of the screen-capture image are w and h, respectively, the longitudinal central axis runs along the line from (w/2, 0) to (w/2, h).
  • After the black image area is divided into two parts, a coordinate system is established with the horizontal central axis of the screen-capture image as the horizontal axis x and the longitudinal central axis of the screen-capture image as the vertical axis y.
  • In the two sub-image areas, pixels having the same ordinate and opposite abscissas are acquired as symmetric pixels, such as pixel A (-x1, y1) and pixel B (x1, y1) in FIG. 2e.
  • the number of symmetric pixel points having the same gray value is counted and the average of the total number of pixel points included in the two sub-image areas is obtained.
  • the similarity ratio P2 of the two sub-image regions may be determined according to a ratio of the number of symmetric pixel points having the same gray value to the average of the total number of pixel points.
  • The similarity rate can be calculated with the following formula:
  • P2 = M / ((C1 + C2 + … + Cn) / 2)
  • where M is the number of symmetric pixel pairs with the same gray value; Ci is the number of pixels of the i-th black region; and n is the number of black image regions contained in the two sub-image areas.
  • the similarity rate is greater than a set similarity threshold, it is determined that the black image area is symmetrical along a longitudinal center axis of the screen shot image.
  • the set similarity threshold may be 99%, that is, when P2 is greater than 99%, it is determined that the black image area is symmetrical along the longitudinal central axis of the screen image.
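The symmetry test of steps 206 and 207 can be sketched as follows, assuming a boolean mask of the black image area and a grayscale image as NumPy arrays; the helper name `similarity_rate` is illustrative:

```python
import numpy as np

def similarity_rate(black_mask, gray):
    """Split the black area along the longitudinal central axis, pair
    pixels with the same ordinate and opposite abscissa, and divide
    the number of matching pairs (same gray value, both black) by the
    average pixel count of the two halves (the patent's P2)."""
    h, w = black_mask.shape
    half = w // 2
    left, right = black_mask[:, :half], black_mask[:, w - half:]
    gl, gr = gray[:, :half], gray[:, w - half:]
    # reverse the right half so symmetric pixels line up column-wise
    matches = int((left & right[:, ::-1] & (gl == gr[:, ::-1])).sum())
    avg = (int(left.sum()) + int(right.sum())) / 2
    return matches / avg if avg else 0.0
```

Comparing the returned rate against the set similarity threshold (99% in the embodiment) then decides between step 209 and step 210.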
  • In this case, the application to be identified is determined to be a VR application.
  • This method is not limited by the package name or name of the application, and has high recognition accuracy and high efficiency for the VR application.
  • the non-VR application can be quickly identified and the user is promptly reminded to enhance the user experience.
  • the electronic device includes a memory 301 and a processor 302.
  • the memory 301 is configured to: store one or more instructions.
  • The processor 302 is configured to execute the one or more instructions to: acquire at least four reference image blocks of a specified size from the four corners of an interface image of an application to be identified; determine, according to the gray values of the pixels included in the at least four reference image blocks, whether the at least four reference image blocks are black image blocks; if the at least four reference image blocks are black image blocks, identify a black image region in the interface image according to the gray values of the pixels included in the interface image; determine, according to the gray values of the pixels included in the black image region, whether the left and right portions of the black image region correspond; and, if the left and right portions of the black image region correspond, determine that the application to be identified is a VR application.
  • Optionally, the processor 302 is specifically configured to: acquire, in the at least four reference image blocks, the number of pixels whose gray value is smaller than a specified gray threshold; calculate the ratio of the number of pixels whose gray value is smaller than the specified gray threshold to the total number of pixels included in the at least four reference image blocks; and, if the ratio is greater than or equal to the set ratio threshold, determine that the at least four reference image blocks are black image blocks.
  • Optionally, the processor 302 is specifically configured to: use the four vertices of the interface image as starting points for image selection, and select image areas of a specified size on the interface image to obtain the at least four reference image blocks.
  • Optionally, the processor 302 is specifically configured to: obtain, in the interface image, the area in which the pixels whose gray values are smaller than the specified gray threshold are located, as a suspect black image area; acquire a geometric envelope of the suspect black image area according to the coordinates of the pixels included in the suspect black image area; and, if the geometric envelope has an apex angle coincident with an apex angle of the interface image, or the longitudinal central axis of the geometric envelope coincides with the longitudinal central axis of the interface image, determine that the suspect black image area belongs to the black image area of the interface image.
  • determining whether the left and right portions of the black image area correspond to each other according to the gray value of the pixel point included in the black image area comprises: according to the gray value of the pixel point included in the black image area, It is determined whether the black image area is symmetrical along a longitudinal center axis of the interface image.
  • Optionally, the processor 302 is specifically configured to: divide the black image area into two left and right sub-image areas along the longitudinal central axis of the interface image; calculate the similarity rate of the two sub-image areas according to the total number of pixels included in the two sub-image areas and the number of symmetric pixel pairs having the same gray value; and, if the similarity rate is greater than the set similarity threshold, determine that the black image area is symmetrical along the longitudinal central axis of the interface image.
• the processor 302 is specifically configured to: establish a coordinate system with the horizontal central axis of the interface image as the abscissa axis and the longitudinal central axis as the ordinate axis; in the two sub-image areas, take pixel points with the same ordinate and opposite abscissas as symmetric pixel points; count the number of symmetric pixel points with the same gray value, and obtain the average of the total numbers of pixels included in the two sub-image areas; and determine the similarity rate of the two sub-image areas as the ratio of the number of symmetric pixel points with the same gray value to that average.
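Following the description above, the similarity-rate computation can be sketched as below. The black image area is assumed here to be given as a rectangular gray-value array already centred on the interface image's longitudinal axis, and the similarity threshold (0.9) is an illustrative assumption.

```python
import numpy as np

def similarity_rate(black_area):
    """Pair each left-half pixel with its mirror pixel (same ordinate,
    opposite abscissa about the longitudinal central axis), count the
    pairs whose gray values are equal, and divide by the average of
    the two halves' total pixel counts."""
    h, w = black_area.shape
    half = w // 2
    left = black_area[:, :half]
    right = black_area[:, w - half:]  # middle column ignored when w is odd
    mirrored = right[:, ::-1]         # flip so mirror pairs line up columnwise
    same = np.count_nonzero(left == mirrored)
    avg_total = (left.size + right.size) / 2
    return same / avg_total

def is_left_right_symmetric(black_area, similarity_threshold=0.9):
    """similarity_threshold stands in for the patent's set threshold."""
    return similarity_rate(black_area) > similarity_threshold
```

A fully mirror-symmetric area yields a rate of 1.0; a rate above the set threshold is treated as left/right correspondence.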
• if the left and right portions correspond to each other, the application to be identified is determined to be a VR application. This method is not limited by the package name or display name of the application, and achieves high recognition efficiency for VR applications.
  • the electronic device may be an external head mounted display device or an integrated head mounted display device, wherein the external head mounted display device needs to be used in conjunction with an external processing system (eg, a computer processing system).
  • FIG. 4 shows a schematic diagram of the internal configuration of the head mounted display device 400 in some embodiments.
• the display unit 401 may include a display panel disposed on the side of the head mounted display device 400 facing the user's face; the panel may be a single panel, or a left panel and a right panel corresponding to the user's left and right eyes, respectively.
• the display panel may be an electroluminescence (EL) element, a liquid crystal display, a microdisplay having a similar structure, a laser-scanned display that projects an image directly onto the retina, or the like.
• the virtual image optical unit 402 magnifies the image displayed by the display unit 401 and allows the user to observe the displayed image as an enlarged virtual image.
• as for the display image output to the display unit 401, it may be an image of a virtual scene provided by a content reproduction device (a Blu-ray disc or DVD player) or a streaming server, or an image of a real scene photographed by the external camera 410.
  • virtual image optical unit 402 can include a lens unit, such as a spherical lens, an aspheric lens, a Fresnel lens, and the like.
• the input operation unit 403 includes at least one operation member for performing input operations, such as a key, a button, a switch, or another component with a similar function; it receives user instructions through the operation member and outputs the instructions to the control unit 407.
  • the status information acquisition unit 404 is configured to acquire status information of the user wearing the head mounted display device 400.
• the status information acquisition unit 404 may include various types of sensors for detecting the status information itself, and may also acquire status information through the communication unit 405 from an external device, such as a smartphone, a wristwatch, or another multi-function terminal worn by the user.
  • the status information acquisition unit 404 can acquire location information and/or posture information of the user's head.
  • the status information acquisition unit 404 may include one or more of a gyro sensor, an acceleration sensor, a global positioning system (GPS) sensor, a geomagnetic sensor, a Doppler effect sensor, an infrared sensor, and a radio frequency field intensity sensor.
• the state information acquisition unit 404 acquires the state information of the user wearing the head mounted display device 400, for example: the operation state of the user (whether the user is wearing the head mounted display device 400); the action state of the user (a movement state such as standing, walking, or running, the state of the hands or fingertips, the open or closed state of the eyes, the direction of the line of sight, and the size of the pupils); the mental state (whether the user is immersed in observing the displayed image, and the like); and even the physiological state.
  • the communication unit 405 performs communication processing, modulation and demodulation processing with an external device, and encoding and decoding processing of the communication signal.
• in addition, the control unit 407 can transmit data to an external device through the communication unit 405.
• the communication method may be wired or wireless, such as Mobile High-definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Wireless Fidelity (Wi-Fi), Bluetooth communication, Bluetooth Low Energy communication, or a mesh network conforming to the IEEE 802.11s standard.
  • communication unit 405 can be a cellular wireless transceiver that operates in accordance with Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and the like.
• the head mounted display device 400 can also include a storage unit 406, which is a mass storage device such as a solid state drive (SSD).
  • storage unit 406 can store applications or various types of data. For example, content viewed by the user using the head mounted display device 400 may be stored in the storage unit 406.
• the head mounted display device 400 can also include a control unit 407, which may include a central processing unit (CPU) or another device having similar functionality.
  • control unit 407 can be used to execute an application stored by storage unit 406, or control unit 407 can also be used to perform the methods, functions, and operations disclosed in some embodiments of the present application.
• the image processing unit 408 performs signal processing, such as image quality correction, on the image signal output from the control unit 407, and converts its resolution to match the screen of the display unit 401. The display driving unit 409 then selects each row of pixels of the display unit 401 in turn and scans them line by line, thereby providing pixel signals based on the signal-processed image signal.
  • the head mounted display device 400 can also include an external camera.
  • the external camera 410 may be disposed on the front surface of the body of the head mounted display device 400, and the external camera 410 may be one or more.
  • the external camera 410 can acquire three-dimensional information and can also be used as a distance sensor.
  • a position sensitive detector (PSD) or other type of distance sensor that detects reflected signals from the object can be used with the external camera 410.
  • the external camera 410 and the distance sensor can be used to detect the body position, posture, and shape of the user wearing the head mounted display device 400. In addition, under certain conditions, the user can directly view or preview the real scene through the external camera 410.
• the head mounted display device 400 may further include a sound processing unit 411, which can perform sound quality correction or sound amplification of the sound signal output from the control unit 407, signal processing of the input sound signal, and the like. The sound input/output unit 412 then outputs sound to the outside after sound processing, and inputs sound from the microphone.
• the structures or components illustrated by the dashed boxes in FIG. 4 may be independent of the head mounted display device 400 and may, for example, be disposed in an external processing system (e.g., a computer system) used with the head mounted display device 400; alternatively, they may be disposed inside or on the surface of the head mounted display device 400.
• the embodiments of the electronic device described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.

Abstract

A method for recognizing a VR application and an electronic device are disclosed. The method comprises: acquiring, from the four corners of an interface image of an application to be recognized, at least four reference image blocks of specified sizes; determining, according to the gray values of the pixel points included in the reference image blocks, whether all of the reference image blocks are black image blocks; if they are all black image blocks, recognizing, according to the gray values of the pixel points included in the interface image, a black image area in the interface image; determining, according to the gray values of the pixel points included in the black image area, whether the left and right portions of the black image area correspond to each other; and, if they correspond to each other, determining that the application to be recognized is a VR application. By means of the technical solution of the present invention, a VR application can be efficiently recognized among a large number of applications.
PCT/CN2017/103193 2017-08-16 2017-09-25 Method for recognizing a VR application and electronic device WO2019033510A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710703519.X 2017-08-16
CN201710703519.XA CN107506031B (zh) Method for identifying a VR application and electronic device

Publications (1)

Publication Number Publication Date
WO2019033510A1 true WO2019033510A1 (fr) 2019-02-21

Family

ID=60692124

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/103193 WO2019033510A1 (fr) 2017-08-16 2017-09-25 Method for recognizing a VR application and electronic device

Country Status (2)

Country Link
CN (1) CN107506031B (fr)
WO (1) WO2019033510A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11189054B2 (en) * 2018-09-28 2021-11-30 Apple Inc. Localization and mapping using images from multiple devices

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101025789A (zh) * 2007-03-29 2007-08-29 上海大学 Digit recognition method based on computed extraction of selective skeleton area features of digital images
US20140214921A1 (en) * 2013-01-31 2014-07-31 Onavo Mobile Ltd. System and method for identification of an application executed on a mobile device
CN104317574A (zh) * 2014-09-30 2015-01-28 北京金山安全软件有限公司 Method and device for identifying the type of an application program
CN106095432A (zh) * 2016-06-07 2016-11-09 北京小鸟看看科技有限公司 Method for identifying an application type

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821298B (zh) * 2012-08-27 2015-06-17 深圳市维尚视界立体显示技术有限公司 Adaptive 3D playback adjustment method, apparatus and device
US10353474B2 (en) * 2015-09-28 2019-07-16 Microsoft Technology Licensing, Llc Unified virtual reality platform
CN105892639A (zh) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Method and device for controlling a virtual reality (VR) device
CN106780521B (zh) * 2016-12-08 2020-01-07 广州视源电子科技股份有限公司 Method, system and device for detecting screen light leakage
CN106982389B (zh) * 2017-03-17 2022-01-07 腾讯科技(深圳)有限公司 Video type identification method and device

Also Published As

Publication number Publication date
CN107506031B (zh) 2020-03-31
CN107506031A (zh) 2017-12-22

Similar Documents

Publication Publication Date Title
US11893689B2 (en) Automated three dimensional model generation
US9406137B2 (en) Robust tracking using point and line features
US20150262412A1 (en) Augmented reality lighting with dynamic geometry
CN115699114B (zh) Method and apparatus for image augmentation for analysis
KR20210071015A (ko) Motion smoothing for reprojected frames
US20210165993A1 (en) Neural network training and line of sight detection methods and apparatus, and electronic device
US9355436B2 (en) Method, system and computer program product for enhancing a depth map
JP2018537748A (ja) Light field rendering of images with variable computational complexity
CN107560637A (zh) Method for verifying calibration results of a head mounted display device, and head mounted display device
CN107704397B (zh) Application testing method and apparatus, and electronic device
WO2022087846A1 (fr) Image processing method and apparatus, device, and storage medium
US11726320B2 (en) Information processing apparatus, information processing method, and program
WO2024055531A1 (fr) Illuminometer value identification method, electronic device, and storage medium
WO2019033510A1 (fr) Method for recognizing a VR application and electronic device
US9282317B2 (en) Method and apparatus for processing an image and generating information representing the degree of stereoscopic effects
WO2018214492A1 (fr) User experience data processing method and apparatus, electronic device, and computer storage medium
US11847784B2 (en) Image processing apparatus, head-mounted display, and method for acquiring space information
CN107545595A (zh) VR scene processing method and VR device
TW201430767A (zh) Method for automatically determining the 3D image format
US20160217559A1 (en) Two-dimensional image processing based on third dimension data
CN107705311B (zh) Method and device for identifying the inside and outside of an image contour
CN108965859B (zh) Projection mode identification method, video playback method, apparatus, and electronic device
US11361511B2 (en) Method, mixed reality system and recording medium for detecting real-world light source in mixed reality
US11782666B1 (en) Aggregate view of image information on display devices
US11323682B2 (en) Electronic device, content processing device, content processing system, image data output method, and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17921986

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17921986

Country of ref document: EP

Kind code of ref document: A1