CN113587812B - Display equipment, measuring method and device - Google Patents

Display equipment, measuring method and device

Info

Publication number
CN113587812B
CN113587812B (application CN202110860108.8A)
Authority
CN
China
Prior art keywords
pixel
target
determining
image
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110860108.8A
Other languages
Chinese (zh)
Other versions
CN113587812A (en)
Inventor
马乐 (Ma Le)
杨鲁明 (Yang Luming)
刘兆磊 (Liu Zhaolei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd
Priority to CN202110860108.8A
Publication of CN113587812A
Application granted
Publication of CN113587812B
Legal status: Active


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30: Assessment of water resources

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiments of the present application belong to the field of display technology and provide a display device, a measuring method and an apparatus. The display device comprises a display, a camera configured to collect images containing depth information, and a controller connected to the display and the camera respectively, the controller being configured to: acquire depth information, spatial position information and pixel position information of an image; determine the actual spatial distance corresponding to a unit pixel on a target part of an object contained in the image according to the pixel position information and spatial position information of target pixels in the image; determine the measurement pixels corresponding to the measurement size in the image according to the depth information and pixel position information of the image; and determine the size of the target part according to the measurement pixels and the actual spatial distance corresponding to the unit pixel. The application can accurately obtain the size of the target part of an object contained in an image, and realize measurement based on depth information, such as human body measurement.

Description

Display equipment, measuring method and device
Technical Field
The present application relates to data processing technology, and more particularly, to a display apparatus, a measuring method, and a device.
Background
With the continuous progress of science and technology, display devices are integrating more and more functions and developing toward greater intelligence. For example, more and more display devices are equipped with cameras to provide richer intelligent functionality, and camera-related applications have become a new business growth point and a focus of user-experience attention.
At present, a display device can acquire depth information of a shooting object through a configured 3D camera and obtain three-dimensional data of the shooting scene in real time, which effectively adds object-perception capability to the display device, allowing it to enter more application scenarios and provide users with novel, interesting interaction experiences or convenient, practical functions. However, no measurement methods based on depth information, such as anthropometric measurement, are currently provided.
Disclosure of Invention
Exemplary embodiments of the present application provide a display apparatus, a measurement method, and a device to implement measurement based on depth information, such as human body measurement, and the like.
In a first aspect, an embodiment of the present application provides a display apparatus, including:
a display;
a camera configured to collect an image containing depth information;
and a controller coupled to the display and the camera, respectively, the controller configured to:
Acquiring depth information, spatial position information and pixel position information of an image, wherein the spatial position information is used for representing the position information of pixels in the image in an actual space, and the pixel position information is used for representing the position information of the pixels in the image;
determining an actual spatial distance corresponding to a unit pixel on a target part of an object contained in an image according to pixel position information and spatial position information of the target pixel in the image, wherein the target pixel is a pixel on the target part;
according to the depth information of the image and the pixel position information of the image, determining a measurement pixel corresponding to the measurement size in the image, wherein the measurement size is the size corresponding to the target part;
and determining the size of the target part according to the actual space distance corresponding to the measurement pixel and the unit pixel.
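For orientation, the four controller steps can be read together as a single pipeline. The following is a minimal end-to-end sketch in Python; the array shape, millimeter units, keypoint positions, and all variable names are illustrative assumptions for demonstration, not the patent's implementation.

```python
import numpy as np

# Hypothetical inputs: a depth map plus two shoulder keypoints with known
# pixel coordinates and world (real-space) x-coordinates in millimeters.
depth = np.full((480, 640), 1500.0)       # step 1: depth information of the image
left_px, left_wx = (240, 260), -200.0     # left-shoulder target pixel
right_px, right_wx = (240, 460), 200.0    # right-shoulder target pixel

# Step 2: actual spatial distance corresponding to a unit pixel.
unit_mm = abs(left_wx - right_wx) / abs(left_px[1] - right_px[1])  # 2.0 mm/px

# Step 3: measurement pixels for the measurement size (here: the pixels on the
# shoulder row between the keypoints; a real implementation also filters them
# by depth, as the implementations below detail).
measurement = [(240, c) for c in range(left_px[1], right_px[1] + 1)]

# Step 4: size of the target part (maximum shoulder width, straight case).
print(len(measurement) * unit_mm, "mm")   # -> 402.0 mm
```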
In some possible implementations, the target pixel includes a first target pixel and a second target pixel, and the controller is specifically configured to, when determining, according to pixel position information and spatial position information of the target pixel in the image, an actual spatial distance corresponding to a unit pixel on a target portion of an object included in the image: determining a first difference value according to pixel position information corresponding to a first target pixel and a second target pixel in the image, wherein the first difference value is used for representing the distance between the first target pixel and the second target pixel in the image; determining a second difference value according to the spatial position information corresponding to the first target pixel and the second target pixel in the image, wherein the second difference value is used for representing the distance between the first target pixel and the second target pixel in the actual space; and determining the actual space distance corresponding to the unit pixel on the target part of the object contained in the image as the ratio of the second difference value to the first difference value.
In some possible implementations, the controller is configured to, when determining the first difference value according to pixel position information corresponding to the first target pixel and the second target pixel in the image, specifically: and determining a first difference value according to the pixel coordinates of the first target pixel in the direction of the first preset coordinate axis and the pixel coordinates of the second target pixel in the direction of the first preset coordinate axis, wherein the pixel coordinates are used for representing the pixel position information.
In some possible implementations, the controller is configured to, when determining the second difference value according to spatial position information corresponding to the first target pixel and the second target pixel in the image, specifically: and determining a second difference value according to the world coordinate of the first target pixel in the second preset coordinate axis direction and the world coordinate of the second target pixel in the second preset coordinate axis direction, wherein the world coordinate is used for representing the space position information.
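Isolating the steps above: the scale on the target part is the ratio of the second difference (real-space distance between the two target pixels) to the first difference (their distance in the image). A small self-contained sketch, with hypothetical keypoint values and millimeter units assumed:

```python
def unit_pixel_distance(px_a, px_b, wx_a, wx_b):
    """Actual spatial distance corresponding to a unit pixel on the target part.

    px_a, px_b : pixel coordinates of the two target pixels along the first
                 preset coordinate axis (pixel position information)
    wx_a, wx_b : world coordinates of the same pixels along the second preset
                 coordinate axis (spatial position information), in mm
    """
    first_diff = abs(px_a - px_b)    # distance between the pixels in the image
    second_diff = abs(wx_a - wx_b)   # distance between them in actual space
    return second_diff / first_diff  # ratio: mm of real space per pixel

# Hypothetical shoulder keypoints 200 px apart in the image and 400 mm apart
# in space give a scale of 2 mm per pixel on the shoulder line:
print(unit_pixel_distance(260, 460, -200.0, 200.0))  # -> 2.0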
In some possible implementations, the controller is configured to, when determining a measurement pixel corresponding to the measurement size in the image according to the depth information of the image and the pixel position information of the image, specifically: determining a center pixel on the corresponding target part according to the pixel position information of the first target pixel and/or the pixel position information of the second target pixel; determining a plurality of candidate pixels according to the pixel position information of the central pixel, wherein the pixel coordinates of the plurality of candidate pixels in the direction of a third preset coordinate axis are the same as that of the central pixel; and determining a corresponding measurement pixel of the measurement size in the image according to the central pixel and the plurality of candidate pixels.
In some possible implementations, the controller, when configured to determine a corresponding measurement pixel of the measurement size in the image from the center pixel and the plurality of candidate pixels, is specifically configured to: determining a depth difference value according to the depth of the candidate pixel and the depth of the center pixel; determining a coordinate difference value according to the pixel coordinates of the candidate pixel and the center pixel respectively in the fourth preset coordinate axis direction; determining an actual coordinate difference value as a product of the coordinate difference value and the actual space distance corresponding to the unit pixel; and if the depth difference is smaller than or equal to a first threshold and the actual coordinate difference is smaller than or equal to a second threshold, determining the candidate pixel as a measurement pixel corresponding to the measurement size in the image.
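A sketch of this candidate-pixel filter under stated assumptions: the depth map holds per-pixel depths in millimeters, the third preset axis is the image row and the fourth is the column, and the two threshold values are hypothetical (the patent gives no numbers).

```python
def select_measurement_pixels(depth, center, candidates, unit_mm,
                              first_threshold=30.0, second_threshold=250.0):
    """Keep the candidates whose depth difference and actual coordinate
    difference from the center pixel fall within the two thresholds."""
    cr, cc = center
    measurement = [center]
    for r, c in candidates:                        # same row as the center pixel
        depth_diff = abs(float(depth[r][c]) - float(depth[cr][cc]))
        actual_coord_diff = abs(c - cc) * unit_mm  # pixel offset -> millimeters
        if depth_diff <= first_threshold and actual_coord_diff <= second_threshold:
            measurement.append((r, c))
    return measurement
```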
In some possible implementations, the controller is specifically configured to, when determining the size of the target portion according to the actual spatial distance corresponding to the measurement pixel and the unit pixel: if the target part is not in the preset shape, determining that the size of the target part is the product of the number of the measurement pixels and the actual space distance corresponding to the unit pixels; if the target part is in the preset shape, determining the size of the target part according to the depth of the measurement pixel and the actual space distance corresponding to the unit pixel.
In some possible implementations, the controller is specifically configured to, when determining the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel: determining the length of a line segment formed by each two adjacent pixels according to the difference value of the depths of each two adjacent pixels in the measured pixels and the actual space distance corresponding to the unit pixel; and determining the size of the target part according to the sum of the lengths of the line segments.
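For a curved target part such as the waist, the contour length can be recovered by summing short segments between adjacent measurement pixels. The sketch below assumes each segment spans one unit distance across the image plane and the depth difference out of it, combined Euclidean-style; the patent only states that both values are used, so this combination is an assumption.

```python
import math

def curved_length_mm(depths_mm, unit_mm):
    """Sum the lengths of the line segments formed by each two adjacent
    measurement pixels, from their depth difference and the actual spatial
    distance corresponding to a unit pixel."""
    return sum(math.hypot(unit_mm, d2 - d1)
               for d1, d2 in zip(depths_mm, depths_mm[1:]))

# Hypothetical front-waist depths (mm) sampled at unit = 2 mm/pixel:
print(round(curved_length_mm([900, 898, 897, 897, 898, 900], 2.0), 1), "mm")
```

Per the two-preset-shape implementation that follows, a closed contour such as the waistline would then be the sum of a front size and a back size, each computed this way.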
In some possible implementations, the preset shapes include a first preset shape and a second preset shape, and the controller is specifically configured to, when determining the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel: determining a first size corresponding to the target part according to the depth of the measurement pixel corresponding to the first preset shape and the actual space distance corresponding to the unit pixel; determining a second size corresponding to the target part according to the depth of the measurement pixel corresponding to the second preset shape and the actual space distance corresponding to the unit pixel; the size of the target site is determined to be the sum of the first size and the second size.
In a second aspect, an embodiment of the present application provides a measurement method, applied to a display device, including:
Acquiring depth information, spatial position information and pixel position information of an image, wherein the spatial position information is used for representing the position information of pixels in the image in an actual space, and the pixel position information is used for representing the position information of the pixels in the image;
determining an actual spatial distance corresponding to a unit pixel on a target part of an object contained in an image according to pixel position information and spatial position information of the target pixel in the image, wherein the target pixel is a pixel on the target part;
according to the depth information of the image and the pixel position information of the image, determining a measurement pixel corresponding to the measurement size in the image, wherein the measurement size is the size corresponding to the target part;
and determining the size of the target part according to the actual space distance corresponding to the measurement pixel and the unit pixel.
In some possible implementations, the target pixels include a first target pixel and a second target pixel, and determining, according to pixel position information and spatial position information of the target pixels in the image, an actual spatial distance corresponding to a unit pixel on a target portion of an object included in the image includes: determining a first difference value according to pixel position information corresponding to a first target pixel and a second target pixel in the image, wherein the first difference value is used for representing the distance between the first target pixel and the second target pixel in the image; determining a second difference value according to the spatial position information corresponding to the first target pixel and the second target pixel in the image, wherein the second difference value is used for representing the distance between the first target pixel and the second target pixel in the actual space; and determining the actual space distance corresponding to the unit pixel on the target part of the object contained in the image as the ratio of the second difference value to the first difference value.
In some possible implementations, determining the first difference according to pixel position information corresponding to the first target pixel and the second target pixel in the image includes: and determining a first difference value according to the pixel coordinates of the first target pixel in the direction of the first preset coordinate axis and the pixel coordinates of the second target pixel in the direction of the first preset coordinate axis, wherein the pixel coordinates are used for representing the pixel position information.
In some possible implementations, determining the second difference according to spatial position information corresponding to the first target pixel and the second target pixel in the image includes: and determining a second difference value according to the world coordinate of the first target pixel in the second preset coordinate axis direction and the world coordinate of the second target pixel in the second preset coordinate axis direction, wherein the world coordinate is used for representing the space position information.
In some possible implementations, determining a corresponding measurement pixel of the measurement size in the image according to the depth information of the image and the pixel position information of the image includes: determining a center pixel on the corresponding target part according to the pixel position information of the first target pixel and/or the pixel position information of the second target pixel; determining a plurality of candidate pixels according to the pixel position information of the central pixel, wherein the pixel coordinates of the plurality of candidate pixels in the direction of a third preset coordinate axis are the same as that of the central pixel; and determining a corresponding measurement pixel of the measurement size in the image according to the central pixel and the plurality of candidate pixels.
In some possible implementations, determining a corresponding measurement pixel of the measurement size in the image from the center pixel and the plurality of candidate pixels includes: determining a depth difference value according to the depth of the candidate pixel and the depth of the center pixel; determining a coordinate difference value according to the pixel coordinates of the candidate pixel and the center pixel respectively in the fourth preset coordinate axis direction; determining an actual coordinate difference value as a product of the coordinate difference value and the actual space distance corresponding to the unit pixel; and if the depth difference is smaller than or equal to a first threshold and the actual coordinate difference is smaller than or equal to a second threshold, determining the candidate pixel as a measurement pixel corresponding to the measurement size in the image.
In some possible implementations, determining the size of the target site according to the actual spatial distance corresponding to the measurement pixel and the unit pixel includes: if the target part is not in the preset shape, determining that the size of the target part is the product of the number of the measurement pixels and the actual space distance corresponding to the unit pixels; if the target part is in the preset shape, determining the size of the target part according to the depth of the measurement pixel and the actual space distance corresponding to the unit pixel.
In some possible implementations, determining the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel includes: determining the length of a line segment formed by each two adjacent pixels according to the difference value of the depths of each two adjacent pixels in the measured pixels and the actual space distance corresponding to the unit pixel; and determining the size of the target part according to the sum of the lengths of the line segments.
In some possible implementations, the preset shapes include a first preset shape and a second preset shape, and determining the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel includes: determining a first size corresponding to the target part according to the depth of the measurement pixel corresponding to the first preset shape and the actual space distance corresponding to the unit pixel; determining a second size corresponding to the target part according to the depth of the measurement pixel corresponding to the second preset shape and the actual space distance corresponding to the unit pixel; the size of the target site is determined to be the sum of the first size and the second size.
In a third aspect, an embodiment of the present application provides a measurement apparatus applied to a display device, including:
the acquisition module is used for acquiring depth information, spatial position information and pixel position information of the image, wherein the spatial position information is used for representing the position information of pixels in the image in an actual space, and the pixel position information is used for representing the position information of the pixels in the image;
the first determining module is used for determining the actual space distance corresponding to the unit pixel on the target part of the object contained in the image according to the pixel position information and the space position information of the target pixel in the image, wherein the target pixel is the pixel on the target part;
The second determining module is used for determining a measuring pixel corresponding to the measuring size in the image according to the depth information of the image and the pixel position information of the image, wherein the measuring size is the size corresponding to the target part;
and the processing module is used for determining the size of the target part according to the actual space distance corresponding to the measurement pixel and the unit pixel.
In some possible implementations, the target pixels include a first target pixel and a second target pixel, and the first determining module is specifically configured to: determining a first difference value according to pixel position information corresponding to a first target pixel and a second target pixel in the image, wherein the first difference value is used for representing the distance between the first target pixel and the second target pixel in the image; determining a second difference value according to the spatial position information corresponding to the first target pixel and the second target pixel in the image, wherein the second difference value is used for representing the distance between the first target pixel and the second target pixel in the actual space; and determining the actual space distance corresponding to the unit pixel on the target part of the object contained in the image as the ratio of the second difference value to the first difference value.
In some possible implementations, the first determining module is configured to, when determining the first difference value according to pixel position information corresponding to the first target pixel and the second target pixel in the image, specifically: and determining a first difference value according to the pixel coordinates of the first target pixel in the direction of the first preset coordinate axis and the pixel coordinates of the second target pixel in the direction of the first preset coordinate axis, wherein the pixel coordinates are used for representing the pixel position information.
In some possible implementations, the first determining module is configured to, when determining the second difference value according to spatial position information corresponding to the first target pixel and the second target pixel in the image, specifically: and determining a second difference value according to the world coordinate of the first target pixel in the second preset coordinate axis direction and the world coordinate of the second target pixel in the second preset coordinate axis direction, wherein the world coordinate is used for representing the space position information.
In some possible implementations, the second determining module is specifically configured to: determining a center pixel on the corresponding target part according to the pixel position information of the first target pixel and/or the pixel position information of the second target pixel; determining a plurality of candidate pixels according to the pixel position information of the central pixel, wherein the pixel coordinates of the plurality of candidate pixels in the direction of a third preset coordinate axis are the same as that of the central pixel; and determining a corresponding measurement pixel of the measurement size in the image according to the central pixel and the plurality of candidate pixels.
In some possible implementations, the second determining module, when configured to determine a corresponding measurement pixel of the measurement size in the image from the center pixel and the plurality of candidate pixels, is specifically configured to: determining a depth difference value according to the depth of the candidate pixel and the depth of the center pixel; determining a coordinate difference value according to the pixel coordinates of the candidate pixel and the center pixel respectively in the fourth preset coordinate axis direction; determining an actual coordinate difference value as a product of the coordinate difference value and the actual space distance corresponding to the unit pixel; and if the depth difference is smaller than or equal to a first threshold and the actual coordinate difference is smaller than or equal to a second threshold, determining the candidate pixel as a measurement pixel corresponding to the measurement size in the image.
In some possible implementations, the processing module is specifically configured to: if the target part is not in the preset shape, determining that the size of the target part is the product of the number of the measurement pixels and the actual space distance corresponding to the unit pixels; if the target part is in the preset shape, determining the size of the target part according to the depth of the measurement pixel and the actual space distance corresponding to the unit pixel.
In some possible implementations, the processing module is configured to determine the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel, and is specifically configured to: determining the length of a line segment formed by each two adjacent pixels according to the difference value of the depths of each two adjacent pixels in the measured pixels and the actual space distance corresponding to the unit pixel; and determining the size of the target part according to the sum of the lengths of the line segments.
In some possible implementations, the preset shapes include a first preset shape and a second preset shape, and the processing module is specifically configured to, when determining the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel: determining a first size corresponding to the target part according to the depth of the measurement pixel corresponding to the first preset shape and the actual space distance corresponding to the unit pixel; determining a second size corresponding to the target part according to the depth of the measurement pixel corresponding to the second preset shape and the actual space distance corresponding to the unit pixel; the size of the target site is determined to be the sum of the first size and the second size.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored therein computer program instructions which, when executed, implement any of the measurement methods according to the second aspect of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements any of the measurement methods according to the second aspect of the present application.
According to the display device, the measuring method and the apparatus provided by the application, the depth information, spatial position information and pixel position information of an image are acquired; the actual spatial distance corresponding to a unit pixel on a target part of an object contained in the image is determined according to the pixel position information and spatial position information of the target pixels in the image; the measurement pixels corresponding to the measurement size in the image are determined according to the depth information and pixel position information of the image; and the size of the target part is determined according to the measurement pixels and the actual spatial distance corresponding to the unit pixel. Since the measurement pixels corresponding to the measurement size are determined based on the depth information of the image, and the size of the target part is determined from the measurement pixels and the actual spatial distance corresponding to the unit pixel, the embodiments of the application can accurately obtain the size of the target part of an object contained in an image and realize measurement based on depth information, such as human body measurement.
These and other aspects of the application will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Drawings
In order to more clearly illustrate the embodiments of the present application or the related art, the drawings required for describing the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and that a person of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an interaction scenario between a display device and a camera according to an embodiment of the present application;
FIG. 2 is a block diagram of a hardware configuration of a display device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a software system of a display device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a pixel coordinate system according to an embodiment of the present application;
FIG. 5 is a diagram of a camera coordinate system according to an embodiment of the present application;
FIG. 6 is a flow chart of a measurement method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of key points of human skeleton according to an embodiment of the present application;
FIG. 8 is a flow chart of a measurement method according to another embodiment of the present application;
FIG. 9a is a schematic diagram of measuring a maximum shoulder width of a human body according to an embodiment of the present application;
FIG. 9b is a schematic view of measuring a maximum shoulder width of a human body according to another embodiment of the present application;
FIG. 9c is a schematic view of a measurement of a maximum shoulder width of a human body according to another embodiment of the present application;
fig. 9d is a schematic diagram of a measurement pixel corresponding to a human waist circumference in an image according to an embodiment of the present application;
fig. 10 is a schematic diagram of measuring waistline of a human body according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a measurement device according to an embodiment of the application.
Detailed Description
For the purposes of making the objects, embodiments and advantages of the present application more apparent, exemplary embodiments of the present application will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the application are shown. It should be understood that the exemplary embodiments described are merely some, but not all, of the embodiments of the application.
Based on the exemplary embodiments described herein, all other embodiments that may be obtained by one of ordinary skill in the art without making any inventive effort are within the scope of the appended claims. Furthermore, while the present disclosure has been described in terms of an exemplary embodiment or embodiments, it should be understood that each aspect of the disclosure can be practiced separately from the other aspects.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first," "second," "third," and the like in the description, in the claims, and in the above drawings are used for distinguishing between similar objects or entities, and are not necessarily intended to describe a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this disclosure refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
The term "remote control" as used herein refers to a component of an electronic device (such as the display device disclosed herein) that can typically control the device wirelessly over a relatively short distance. It is typically connected to the electronic device using infrared and/or radio frequency (RF) signals and/or Bluetooth, and may also include functional modules such as WiFi, wireless USB, Bluetooth, and motion sensors. For example, a hand-held touch remote control replaces most of the physical built-in hard keys of a general remote control device with a touch-screen user interface.
The term "gesture" as used herein refers to a user action by a change in hand shape or hand movement, etc., used to express an intended idea, action, purpose, or result.
Fig. 1 is a schematic diagram of an interaction scenario between a display device and a camera according to an embodiment of the present application. As shown in fig. 1, the display apparatus 200 measures a human body from an image containing the human body obtained by the camera 100 and displays the measured size of a target part of the human body, such as the maximum shoulder width or the waistline, wherein the camera 100, for example a 3D camera, is capable of acquiring images containing depth information.
As also shown in fig. 1, the display device 200 is also in data communication with the server 400 via a variety of communication means. The display device 200 may be allowed to make communication connections via a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. By way of example, the display device 200 receives software program updates, or accesses a remotely stored digital media library, by sending and receiving information and through electronic program guide (EPG) interactions. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers. Other web service content, such as video on demand and advertising services, is provided through the server 400.
The display device 200 may be a liquid crystal display, an OLED display, or a projection display device. The particular display device type, size, resolution, etc. are not limited, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired.
In addition to the broadcast receiving television function, the display apparatus 200 may additionally provide smart network television functions with computer support, including, but not limited to, network television, smart television, Internet Protocol Television (IPTV), and the like.
Fig. 2 is a block diagram of a hardware configuration of a display device according to an embodiment of the present application. As shown in fig. 2, in some embodiments, at least one of the controller 250, the modem 210, the communicator 220, the detector 230, the input/output interface 255, the display 275, the audio output interface 285, the memory 260, the power supply 290, the user interface 265, the external device interface 240 is included in the display apparatus 200.
In some embodiments, the display 275 is configured to receive image signals from the first processor output, and to display video content and images and components of the menu manipulation interface.
In some embodiments, display 275 includes a display screen assembly for presenting pictures, and a drive assembly for driving the display of images.
In some embodiments, the displayed video content may come from broadcast television content, or from various broadcast signals that may be received via wired or wireless communication protocols. Alternatively, various image contents received from a network server via network communication protocols may be displayed.
In some embodiments, the display 275 is used to present a user-manipulated UI interface generated in the display device 200 and used to control the display device 200.
In some embodiments, depending on the type of display 275, a drive assembly for driving the display is also included.
In some embodiments, display 275 is a projection display and may further include a projection device and a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi chip, a bluetooth communication protocol chip, a wired ethernet communication protocol chip, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver.
In some embodiments, the display device 200 may establish control signal and data signal transmission and reception between the communicator 220 and an external control device or a content providing device.
In some embodiments, the user interface 265 may be used to receive infrared control signals from a control device (e.g., an infrared remote control, etc.).
In some embodiments, the detector 230 is a component used by the display device 200 to collect signals from, or interact with, the external environment.
In some embodiments, the detector 230 includes an optical receiver, i.e., a sensor for capturing the intensity of ambient light, so that display parameters can be adaptively changed according to the captured ambient light.
In some embodiments, the detector 230 may further include an image collector, such as a camera, a video camera, etc., which may be used to collect external environmental scenes, collect attributes of a user or interact with a user, adaptively change display parameters, and recognize a user gesture to realize an interaction function with the user.
In some embodiments, the detector 230 may also include a temperature sensor or the like, such as by sensing ambient temperature.
In some embodiments, the display device 200 may adaptively adjust the display color temperature of the image: for example, when the ambient temperature is relatively high, the display device 200 may be adjusted to display the image in a colder color temperature tone; when the temperature is relatively low, it may be adjusted to display the image in a warmer tone.
In some embodiments, the detector 230 may also include a sound collector, such as a microphone, that may be used to receive the user's voice. Illustratively, it receives a voice signal containing a control instruction by which the user controls the display apparatus 200, or collects environmental sounds to recognize the type of environmental scene, so that the display apparatus 200 can adapt to environmental noise.
In some embodiments, as shown in fig. 2, the input/output interface 255 is configured to enable data transfer between the controller 250 and external other devices or other controllers 250. Such as receiving video signal data and audio signal data of an external device, command instruction data, or the like.
In some embodiments, the external device interface 240 may include, but is not limited to, one or more of the following: a High-Definition Multimedia Interface (HDMI), an analog or data high-definition component input interface, a composite video input interface, a USB input interface, an RGB port, and the like. Multiple interfaces may also form a composite input/output interface.
In some embodiments, as shown in fig. 2, the modem 210 is configured to receive the broadcast television signal by a wired or wireless receiving manner, and may perform modulation and demodulation processes such as amplification, mixing, and resonance, and demodulate the audio/video signal from the plurality of wireless or wired broadcast television signals, where the audio/video signal may include a television audio/video signal carried in a television channel frequency selected by a user, and an EPG data signal.
In some embodiments, the frequency point demodulated by the modem 210 is controlled by the controller 250; the controller 250 may send a control signal according to the user's selection, so that the modem responds to the television signal frequency selected by the user and demodulates the television signal carried by that frequency.
In some embodiments, the broadcast television signal may be classified into a terrestrial broadcast signal, a cable broadcast signal, a satellite broadcast signal, an internet broadcast signal, or the like according to a broadcasting system of the television signal. Or may be differentiated into digital modulation signals, analog modulation signals, etc., depending on the type of modulation. Or it may be classified into digital signals, analog signals, etc. according to the kind of signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like. In this way, the set-top box outputs the television audio and video signals modulated and demodulated by the received broadcast television signals to the main body equipment, and the main body equipment receives the audio and video signals through the first input/output interface.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command to select to display a UI object on the display 275, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, such as: displaying an operation of connecting to a hyperlink page, a document, an image, or the like, or executing an operation of a program corresponding to the icon. The user command for selecting the UI object may be an input command through various input means (e.g., mouse, keyboard, touch pad, etc.) connected to the display device 200 or a voice command corresponding to a voice uttered by the user.
As shown in fig. 2, the controller 250 includes at least one of a random access memory 251 (RAM), a read-only memory 252 (ROM), a video processor 270, an audio processor 280, other processors 253 (e.g., a graphics processing unit (GPU)), a central processing unit 254 (CPU), a communication interface, and a communication bus 256 that connects the respective components.
In some embodiments, RAM 251 is used to store temporary data for the operating system or other running programs; in some embodiments, ROM 252 is used to store various system boot instructions.
In some embodiments, ROM 252 is used to store a basic input/output system (BIOS), which comprises a driver program and a boot operating system and is used for completing power-on self-test of the system, initialization of each functional module in the system, and basic input/output of the system.
In some embodiments, upon receipt of the power-on signal, the display device 200 starts up, and the CPU runs the system boot instructions in the ROM 252 and copies the temporary data of the operating system stored in memory into the RAM 251 in order to start or run the operating system. After the operating system is started, the CPU copies the temporary data of various applications in memory into the RAM 251 to facilitate starting or running the various applications.
In some embodiments, the CPU processor 254 is used to execute the operating system and application program instructions stored in memory, and to execute various application programs, data and contents according to the various interactive instructions received from the outside, so as to finally display and play various audio and video contents.
In some exemplary embodiments, the CPU processor 254 may comprise a plurality of processors, including one main processor and one or more sub-processors. The main processor performs some operations of the display apparatus 200 in the pre-power-up mode and/or displays pictures in the normal mode; the one or more sub-processors perform operations in standby mode or the like.
In some embodiments, the graphics processor 253 is configured to generate various graphical objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It comprises an arithmetic unit, which performs operations on the various interaction instructions input by the user and displays various objects according to their display attributes, and a renderer, which renders the objects produced by the arithmetic unit for display on the display.
In some embodiments, the video processor 270 is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image composition according to the standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the display device 200.
In some embodiments, video processor 270 includes a demultiplexing module, a video decoding module, an image compositing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used for demultiplexing the input audio/video data stream, such as an input MPEG-2 stream, into video signals, audio signals and the like.
The video decoding module is used for processing the demultiplexed video signal, including decoding, scaling and the like.
The image synthesis module, such as an image synthesizer, is used for superimposing and mixing the GUI signal input by the user or generated by the graphics generator with the scaled video image, so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of the input video, for example converting a 60 Hz frame rate into a 120 Hz or 240 Hz frame rate, commonly by means of frame interpolation.
The display formatting module is used for converting the received video output signal, after frame rate conversion, into a signal conforming to the display format, such as an RGB data signal.
In some embodiments, the graphics processor 253 may be integrated with the video processor or configured separately. When integrated, it can process graphics signals output to the display; when configured separately, the two perform different functions, for example in a GPU + FRC (Frame Rate Conversion) architecture.
In some embodiments, the audio processor 280 is configured to receive an external audio signal, decompress and decode the audio signal according to a standard codec protocol of an input signal, and perform noise reduction, digital-to-analog conversion, and amplification processing, so as to obtain a sound signal that can be played in a speaker.
In some embodiments, video processor 270 may include one or more chips. The audio processor may also comprise one or more chips.
In some embodiments, video processor 270 and audio processor 280 may be separate chips or integrated with the controller in one or more chips.
In some embodiments, the audio output receives, under the control of the controller 250, the sound signal output by the audio processor 280, for example via the speaker 286. In addition to the speaker carried by the display device 200 itself, the sound may be output to an external sound output terminal of a sound-generating device of an external device, such as an external sound interface or an earphone interface. The communication interface may also include a near-field communication module, for example a Bluetooth module for outputting sound to a Bluetooth speaker.
The power supply 290 supplies power input from an external power source to the display device 200 under the control of the controller 250. The power supply 290 may include a built-in power circuit installed inside the display device 200, or may be an external power supply, with a power interface provided in the display device 200 for connecting the external power source.
The user interface 265 is used to receive an input signal from a user and then transmit the received user input signal to the controller 250. The user input signal may be a remote control signal received through an infrared receiver, and various user control signals may be received through a network communication module.
In some embodiments, a user inputs a user command through a control device or a mobile terminal, the user input interface responds to the user input, and the display apparatus 200 responds to the user input through the controller 250.
In some embodiments, a user may input a user command through a graphical user interface (GUI) displayed on the display 275, and the user input interface receives the user input command through the GUI. Alternatively, the user may input a user command by a specific sound or gesture, and the user input interface recognizes the sound or gesture through a sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
The memory 260 stores various software modules for driving the display device 200, such as the various software modules stored in the first memory, including at least one of a base module, a detection module, a communication module, a display control module, a browser module, various service modules, and the like.
The base module is a bottom software module for signal communication between the various hardware in the display device 200 and for sending processing and control signals to the upper modules. The detection module is used for collecting various information from various sensors or user input interfaces and carrying out digital-to-analog conversion and analysis management.
For example, the voice recognition module includes a voice analysis module and a voice instruction database module. The display control module is used to control the display to display image content and can be used to play multimedia image content, UI interfaces and other information. The communication module is used for control and data communication with external devices. The browser module is used for data communication with browsing servers. The service module is used to provide various services and applications. Meanwhile, the memory 260 also stores received external data and user data, images of various items in various user interfaces, visual effect maps of focus objects, and the like.
Fig. 3 is a schematic diagram of the software system of a display device according to an embodiment of the application. Referring to FIG. 3, in some embodiments, the system is divided into four layers, which are, from top to bottom, an application layer (simply "application layer"), an application framework layer (Application Framework layer; simply "framework layer"), an Android runtime (Android Runtime) and system library layer (simply "system runtime layer"), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, a camera application, and the like; and may be an application program developed by a third party developer, such as a hi-see program, a K-song program, a magic mirror program, etc. In particular implementations, the application packages in the application layer are not limited to the above examples, and may actually include other application packages, which the embodiments of the present application do not limit.
The framework layer provides an application programming interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions and corresponds to a processing center that decides how the applications in the application layer act. Through the API interface, an application program can, during execution, access resources in the system and acquire the services of the system.
As shown in fig. 3, the application framework layer in the embodiment of the present application includes a manager (Manager), a content provider (Content Provider), and the like, where the manager includes at least one of the following modules: an activity manager (Activity Manager), used to interact with all activities running in the system; a location manager (Location Manager), used to provide system services or applications with access to system location services; a package manager (Package Manager), used to retrieve various information about the application packages currently installed on the device; a notification manager (Notification Manager), used to control the display and clearing of notification messages; and a window manager (Window Manager), used to manage icons, windows, toolbars, wallpaper, and desktop components on the user interface.
In some embodiments, the activity manager is to: the lifecycle of each application program is managed, as well as the usual navigation rollback functions, such as controlling the exit of the application program (including switching the currently displayed user interface in the display window to the system desktop), opening, backing (including switching the currently displayed user interface in the display window to the previous user interface of the currently displayed user interface), etc.
In some embodiments, the window manager is configured to manage all window procedures, such as obtaining a display screen size, determining whether there is a status bar, locking the screen, intercepting the screen, controlling display window changes (e.g., scaling the display window down, dithering, distorting, etc.), and so on.
In some embodiments, the system runtime layer provides support for the framework layer above it: when the framework layer is in use, the Android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions required by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 3, the kernel layer contains at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, a camera driver, a Wi-Fi driver, a USB driver, an HDMI driver, a sensor driver (e.g., fingerprint sensor, temperature sensor, touch sensor, pressure sensor, etc.), and the like.
In some embodiments, the kernel layer further includes a power driver module for power management.
In some embodiments, the software programs and/or modules corresponding to the software architecture in fig. 3 are stored in the first memory or the second memory shown in fig. 2.
In some embodiments, taking the magic mirror application (a photographing application) as an example, when the remote control receiving device receives an input operation of the remote control, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the input operation into an original input event (including the value of the input operation, the timestamp of the input operation, etc.), and the original input event is stored at the kernel layer. The application framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the event according to the current position of the focus, taking the input operation as a confirmation operation. As the control corresponding to the confirmation operation is the magic mirror application icon, the magic mirror application calls the interface of the application framework layer to start itself, and then starts the camera driver by calling the kernel layer, so that a still image or video is captured through the camera.
In some embodiments, for a display device with a touch function, taking a split-screen operation as an example, the display device receives an input operation (such as a split-screen operation) performed by a user on the display screen, and the kernel layer generates a corresponding input event according to the input operation and reports the event to the application framework layer. The activity manager of the application framework layer sets the window mode (e.g., multi-window mode) and the window position and size corresponding to the input operation. The window manager of the application framework layer then draws the window according to the settings of the activity manager and sends the drawn window data to the display driver of the kernel layer, and the display driver displays the application interfaces corresponding to the window data in different display areas of the display screen.
First, some technical terms related to the present application will be explained:
A depth map is an image whose pixel values are the distances (also called depths) from an image collector (such as a 3D camera) to points in the scene; each pixel in the depth map therefore corresponds to the depth information of one point in space.
The pixel coordinate system is the coordinate system, in units of pixels, established with the upper left corner of the image as the origin. For example, fig. 4 is a schematic diagram of a pixel coordinate system provided in an embodiment of the present application. As shown in fig. 4, with the upper left corner of the camera preview image as the coordinate origin o, the x axis pointing right, and the y axis pointing down, the pixels in a depth map captured by the camera can be expressed in this pixel coordinate system. It should be noted that each pixel in an image shot by a common color camera contains an RGB color value, recording the objects within the camera's viewing angle, so that the whole image appears as the spatial scene actually seen by the human eye. A 3D camera, in contrast, captures spatial distance information: each shot pixel contains one piece of length information, namely the distance from the corresponding point in the shot space to the camera. The image taken by the 3D camera may use the pixel coordinate system, with each pixel represented as (x_d, y_d); each pixel corresponds to a distance, also called a depth, specifically denoted as depth = d(x_d, y_d).
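To make this notation concrete, the following minimal Python sketch (not part of the patent itself; the array shape and unit are illustrative) models a depth map as a 2D array and the function depth = d(x_d, y_d) as an array lookup:

```python
import numpy as np

# Hypothetical 640x480 depth map: depth_map[y_d, x_d] holds the distance (depth)
# from the camera to the point imaged at pixel (x_d, y_d), e.g. in metres.
depth_map = np.full((480, 640), 2.0, dtype=np.float32)

def depth(x_d: int, y_d: int) -> float:
    """depth = d(x_d, y_d) in the pixel coordinate system: origin at the
    top-left corner of the image, x to the right, y downward."""
    return float(depth_map[y_d, x_d])
```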
The camera coordinate system is a three-dimensional rectangular coordinate system established with the focusing center of the camera as the origin and the optical axis as the z axis. Fig. 5 is a schematic diagram of a camera coordinate system according to an embodiment of the present application. As shown in fig. 5, the optical center c of the camera lens is taken as the coordinate origin o, the z axis is the optical axis of the camera lens, the y axis points upward parallel to the image plane, and the x axis points right parallel to the image plane, following the right-hand rule.
A display device can provide more and more intelligent functions through a configured camera, and camera-related applications have become a new service growth point and a focus of user experience. For example, social televisions featuring a lift camera can provide six-way video calls as well as artificial intelligence (Artificial Intelligence, AI) image processing. At present, a display device can acquire depth information of a shooting object through a configured 3D camera and acquire three-dimensional data of the shooting scene in real time, which is equivalent to adding object perception capability to the display device, so that more application scenarios can be supported, and novel, interesting interaction experiences or convenient, practical functions, such as 3D fitting and man-machine interaction, can be provided for users. However, no measurement method based on depth information, such as anthropometric measurement, is currently provided.
In view of the above, the present application provides a display device, a measurement method, and a device, which acquire an image containing a human body through a camera capable of collecting images with depth information and measure a target portion of the human body based on the depth information of the image, so that the size of the target portion can be obtained accurately.
In the following, the measurement of the maximum shoulder width and the waistline of a human body will be described as examples.
Fig. 6 is a flowchart of a measurement method according to an embodiment of the application, which is applied to a display device. As shown in fig. 6, the controller in the display device is configured to perform the steps of:
in S601, depth information, spatial position information, and pixel position information of an image are acquired.
The spatial position information is used for representing the position information of the pixels in the image in the actual space, and the pixel position information is used for representing the position information of the pixels in the image.
In an embodiment of the present application, referring to fig. 1, the camera 100 can collect an image containing depth information; the camera 100 is, for example, a 3D camera. The image shot by the 3D camera contains depth information, i.e., it is a depth map, and the depth map contains a human body. The 3D camera can identify the position of the human body in the scene according to a preset computer vision algorithm (reference may be made to the current related art, which is not repeated in the embodiments of the present application), and the position of the human body is represented, for example, by a plurality of skeletal key points. For example, fig. 7 is a schematic diagram of skeletal key points of a human body according to an embodiment of the present application. As shown in fig. 7, the human body includes 19 skeletal key points: the head, left shoulder, right shoulder, shoulder spine, chest spine, middle spine, waist spine, left elbow, right elbow, left wrist, right wrist, left hand, right hand, left crotch, right crotch, left knee, right knee, left foot, and right foot. The information of each skeletal key point (joint) contains the coordinates of the key point in the camera coordinate system, also called world coordinates and denoted (x_w, y_w, z_w), the pixel coordinates of the key point in the depth map in the pixel coordinate system (i.e., (x_d, y_d)), and the depth of the key point. The world coordinates of each skeletal key point are its spatial position information, and its pixel coordinates are its pixel position information. The 3D camera outputs the depth information, spatial position information, and pixel position information of the image according to the shot image, namely, the depth information, spatial position information, and pixel position information corresponding to each pixel in the image; accordingly, the controller acquires the depth information, spatial position information, and pixel position information of the image from the 3D camera.
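As an illustration only, the per-keypoint data described above could be modeled as follows; the field names and sample values are hypothetical and do not reflect any particular camera SDK:

```python
from dataclasses import dataclass

@dataclass
class SkeletalKeypoint:
    """One of the 19 skeletal key points reported by the 3D camera.
    Field names are illustrative, not an actual camera API."""
    name: str                            # e.g. "left_shoulder"
    world: tuple[float, float, float]    # (x_w, y_w, z_w), camera (world) coordinates
    pixel: tuple[int, int]               # (x_d, y_d), pixel coordinates in the depth map
    depth: float                         # distance from the camera to the point

# Hypothetical sample values for the two shoulder key points.
left_shoulder = SkeletalKeypoint("left_shoulder", (-0.21, 1.35, 2.00), (250, 180), 2.00)
right_shoulder = SkeletalKeypoint("right_shoulder", (0.20, 1.35, 2.01), (390, 180), 2.01)
```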
In S602, an actual spatial distance corresponding to a unit pixel at a target portion of an object included in an image is determined based on pixel position information and spatial position information of the target pixel in the image.
Wherein the target pixel is a pixel on the target site.
In this step, the image includes an object such as a human body, a target portion such as a shoulder, a waist, or the like of the human body, and a target pixel such as a pixel on the shoulder or the waist of the human body, for example. After the pixel position information and the spatial position information of the image containing the human body are obtained, the actual spatial distance corresponding to the unit pixel on the shoulder of the human body contained in the image can be determined according to the pixel position information and the spatial position information of the target pixel on the shoulder of the human body in the image. Alternatively, the actual spatial distance corresponding to the unit pixel on the waist of the human body included in the image may be determined according to the pixel position information and the spatial position information of the target pixel on the waist in the image. For determining the actual spatial distance corresponding to the unit pixel on the target portion of the object included in the image according to the pixel position information and the spatial position information of the target pixel in the image, reference may be made to related techniques or subsequent embodiments, which are not described herein.
In S603, a measurement pixel corresponding to the measurement size in the image is determined from the depth information of the image and the pixel position information of the image.
The measurement size is the size corresponding to the target part.
Illustratively, the target portion is, for example, a shoulder of a human body, and the measurement size is, for example, a maximum shoulder width of the human body, and the target portion is, for example, a waist of the human body, and the measurement size is, for example, a waistline of the human body. For example, after depth information of an image including a human body and pixel position information of the image are obtained, a measurement pixel corresponding to the maximum shoulder width of the human body in the image may be determined according to the depth information of the image and the pixel position information of the image, or a measurement pixel corresponding to the waistline of the human body in the image may be determined. For how to determine the measurement pixels corresponding to the measurement size in the image according to the depth information of the image and the pixel position information of the image, reference may be made to related techniques or subsequent embodiments, which are not described herein.
In S604, the size of the target portion is determined based on the actual spatial distance corresponding to the measurement pixel and the unit pixel.
After the actual spatial distance corresponding to the measurement pixel and the unit pixel is obtained, the size of the target portion may be determined according to the actual spatial distance corresponding to the measurement pixel and the unit pixel. For example, the human body maximum shoulder width may be determined according to an actual spatial distance corresponding to a measurement pixel corresponding to the human body maximum shoulder width in the image and a unit pixel on the human body shoulder. For example, the human waistline may be determined from the actual spatial distance corresponding to the measurement pixels of the human waistline in the image and the unit pixels on the human waist. For how to determine the size of the target portion according to the actual spatial distance corresponding to the measurement pixel and the unit pixel, reference may be made to the related art or the subsequent embodiments, which will not be described herein.
After the size of the target site is obtained, the display apparatus 200 may display the size of the target site on the display interface or apply the obtained size of the target site to different application scenarios, such as 3D fitting.
According to the measurement method provided by this embodiment of the application, the depth information, spatial position information, and pixel position information of the image are acquired; the actual spatial distance corresponding to a unit pixel on the target portion of the object contained in the image is determined according to the pixel position information and spatial position information of the target pixels in the image; the measurement pixels corresponding to the measurement size in the image are determined according to the depth information and pixel position information of the image; and the size of the target portion is determined according to the measurement pixels and the actual spatial distance corresponding to the unit pixel. Because the measurement pixels corresponding to the measurement size are determined based on the depth information of the image, and the size of the target portion is determined from the measurement pixels and the actual spatial distance corresponding to the unit pixel, the size of the target portion of the object contained in the image can be obtained accurately, realizing measurement based on depth information, such as human body measurement.
The measurement method provided by the embodiment of the application is described in detail below with reference to specific steps.
Fig. 8 is a flowchart of a measurement method according to another embodiment of the present application. As shown in fig. 8, the controller in the display device is configured to perform the steps of:
in S801, depth information, spatial position information, and pixel position information of an image are acquired.
The spatial position information is used for representing the position information of the pixels in the image in the actual space, and the pixel position information is used for representing the position information of the pixels in the image.
The specific implementation process of this step may be referred to as related description of S601, which is not repeated here.
Optionally, the target pixels include a first target pixel and a second target pixel.
Illustratively, referring to fig. 7, the first target pixel is, for example, a pixel corresponding to the left shoulder of a human body, and the second target pixel is, for example, a pixel corresponding to the right shoulder. Fig. 9a is a schematic diagram of measuring the maximum shoulder width of a human body according to an embodiment of the present application; as shown in fig. 9a, pixel 901 corresponding to the left shoulder is the first target pixel, and pixel 902 corresponding to the right shoulder is the second target pixel. As another example, referring to fig. 7, the first target pixel is, for example, a pixel corresponding to the middle spine of the human body, and the second target pixel is, for example, a pixel corresponding to the waist spine of the human body.
In the embodiment of the present application, the step S602 in fig. 6 may be further refined into the following three steps S802 to S804:
in S802, a first difference is determined according to pixel position information corresponding to a first target pixel and a second target pixel in an image.
Wherein the first difference value is used to characterize the distance of the first target pixel and the second target pixel in the image.
In this step, after the pixel position information of the image is obtained, the pixel position information corresponding to the first target pixel and the second target pixel in the image may be determined, and therefore, the first difference may be determined according to the pixel position information corresponding to the first target pixel and the second target pixel in the image.
Further, when the controller is configured to determine the first difference value according to pixel position information corresponding to the first target pixel and the second target pixel in the image, the controller may be specifically configured to: and determining a first difference value according to the pixel coordinates of the first target pixel in the direction of the first preset coordinate axis and the pixel coordinates of the second target pixel in the direction of the first preset coordinate axis, wherein the pixel coordinates are used for representing the pixel position information.
Illustratively, the first preset coordinate axis direction is, for example, the x-axis direction of the pixel coordinate system of the image. Referring to fig. 9a, the pixel coordinate of the first target pixel 901 in the x-axis direction of the pixel coordinate system is denoted left.x_d, and that of the second target pixel 902 is denoted right.x_d; the first difference is then |right.x_d - left.x_d|. As another example, the first preset coordinate axis direction is the y-axis direction of the pixel coordinate system of the image. Referring to fig. 7, the first target pixel is, for example, a pixel corresponding to the middle spine of the human body, and the second target pixel is, for example, a pixel corresponding to the waist spine of the human body; the pixel coordinate of the first target pixel in the y-axis direction of the pixel coordinate system is denoted up.y_d, and that of the second target pixel is denoted down.y_d, so the first difference is |down.y_d - up.y_d|.
In S803, a second difference is determined according to spatial position information corresponding to the first target pixel and the second target pixel in the image, respectively.
Wherein the second difference value is used to characterize the distance of the first target pixel and the second target pixel in real space.
In this step, after the spatial position information of the image is obtained, the spatial position information corresponding to the first target pixel and the second target pixel in the image may be determined, so that the second difference may be determined according to the spatial position information corresponding to the first target pixel and the second target pixel in the image.
Further, when the controller is configured to determine the second difference value according to the spatial position information corresponding to the first target pixel and the second target pixel in the image, the controller may be specifically configured to: and determining a second difference value according to the world coordinate of the first target pixel in the second preset coordinate axis direction and the world coordinate of the second target pixel in the second preset coordinate axis direction, wherein the world coordinate is used for representing the space position information.
Illustratively, the second preset coordinate axis direction is, for example, the x-axis direction of the camera coordinate system of the 3D camera. Referring to fig. 9a, the world coordinate of the first target pixel 901 in the x-axis direction of the camera coordinate system is denoted left.x_w, and that of the second target pixel 902 is denoted right.x_w; the second difference is then |right.x_w - left.x_w|. As another example, the second preset coordinate axis direction is the y-axis direction of the camera coordinate system of the 3D camera. Referring to fig. 7, the first target pixel is, for example, a pixel corresponding to the middle spine of the human body, and the second target pixel is, for example, a pixel corresponding to the waist spine of the human body; the world coordinate of the first target pixel in the y-axis direction of the camera coordinate system is denoted up.y_w, and that of the second target pixel is denoted down.y_w, so the second difference is |up.y_w - down.y_w|.
In S804, it is determined that the actual spatial distance corresponding to the unit pixel on the target portion of the object included in the image is a ratio of the second difference value to the first difference value.
Illustratively, referring to fig. 9a, the first difference determined from the pixel position information of the first target pixel 901 and the second target pixel 902 is |right.x_d - left.x_d|, and the second difference determined from their spatial position information is |right.x_w - left.x_w|; the actual spatial distance corresponding to a unit pixel on the shoulder of the human body contained in the image can then be determined as m = |right.x_w - left.x_w| / |right.x_d - left.x_d|. As another example, referring to fig. 7, with the first target pixel being a pixel corresponding to the middle spine of the human body and the second target pixel being a pixel corresponding to the waist spine, the first difference determined from their pixel position information is |down.y_d - up.y_d|, and the second difference determined from their spatial position information is |up.y_w - down.y_w|; the actual spatial distance corresponding to a unit pixel on the waist of the human body contained in the image can then be determined as m = |up.y_w - down.y_w| / |down.y_d - up.y_d|.
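A minimal Python sketch of S802 to S804 for the shoulder case, reusing the hypothetical SkeletalKeypoint structure from the earlier sketch and assuming the first and second preset coordinate axes are both the x axis:

```python
def unit_pixel_distance(left: SkeletalKeypoint, right: SkeletalKeypoint) -> float:
    """S802-S804: m = second difference / first difference,
    i.e. m = |right.x_w - left.x_w| / |right.x_d - left.x_d|."""
    first_difference = abs(right.pixel[0] - left.pixel[0])    # distance in the image, in pixels
    second_difference = abs(right.world[0] - left.world[0])   # distance in real space
    return second_difference / first_difference

# With the sample values above: about 0.41 m across 140 pixels,
# i.e. roughly 2.9 mm of real space per pixel at the shoulders.
m = unit_pixel_distance(left_shoulder, right_shoulder)
```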
In the embodiment of the present application, the step S603 in fig. 6 may be further refined into the following three steps S805 to S807:
in S805, a center pixel on the corresponding target portion is determined according to the pixel position information of the first target pixel and/or the pixel position information of the second target pixel.
Illustratively, referring to fig. 9a, the first target pixel 901 has pixel coordinates (left.x_d, left.y_d) and the second target pixel 902 has pixel coordinates (right.x_d, right.y_d). The pixel coordinates of the center pixel 903 on the corresponding shoulder in the pixel coordinate system may then be determined as: center.x = (left.x_d + right.x_d)/2, center.y = (left.y_d + right.y_d)/2. As another example, referring to fig. 7, the first target pixel is, for example, a pixel corresponding to the middle spine of the human body, and the second target pixel is, for example, a pixel corresponding to the waist spine of the human body; in this case, the second target pixel is determined to be the center pixel on the waist of the human body.
In S806, a plurality of candidate pixels are determined according to the pixel position information of the center pixel, and the plurality of candidate pixels are identical to the pixel coordinates of the center pixel in the third preset coordinate axis direction.
In this step, illustratively, the third preset coordinate axis direction is, for example, the y-axis direction of the pixel coordinate system. Referring to fig. 9a, the pixel coordinates of the center pixel determined from the first target pixel 901 and the second target pixel 902 are center.x = (left.x_d + right.x_d)/2, center.y = (left.y_d + right.y_d)/2; a plurality of candidate pixels having the same y coordinate, center.y, as the center pixel may then be acquired in the depth map, which may be understood as a series of pixel coordinate points at the same horizontal height as the center pixel. Fig. 9b is a schematic diagram of measuring the maximum shoulder width of a human body according to another embodiment of the present application; as shown in fig. 9b, the plurality of candidate pixels 904 have the same horizontal height as the center pixel 903. As another example, referring to fig. 7, the first target pixel is, for example, a pixel corresponding to the middle spine of the human body, and the second target pixel is, for example, a pixel corresponding to the waist spine of the human body, with pixel coordinates (down.x_d, down.y_d). After the second target pixel is determined to be the center pixel on the waist of the human body, the candidate pixels having the same y coordinate as the center pixel, i.e., down.y_d, may be acquired in the depth map.
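The following sketch (illustrative names, reusing SkeletalKeypoint from above, and assuming the third preset coordinate axis is the y axis) computes the center pixel of S805 and the candidate row of S806 for the shoulder case:

```python
def center_pixel(left: SkeletalKeypoint, right: SkeletalKeypoint) -> tuple[int, int]:
    """S805: midpoint of the two target pixels in the pixel coordinate system,
    center.x = (left.x_d + right.x_d)/2, center.y = (left.y_d + right.y_d)/2."""
    return ((left.pixel[0] + right.pixel[0]) // 2,
            (left.pixel[1] + right.pixel[1]) // 2)

def candidate_row(center: tuple[int, int], image_width: int) -> list[tuple[int, int]]:
    """S806: every pixel of the row sharing the center pixel's y coordinate,
    i.e. the pixels at the same horizontal height as the center pixel."""
    cx, cy = center
    return [(x, cy) for x in range(image_width) if x != cx]
```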
In S807, a measurement pixel corresponding to the measurement size in the image is determined from the center pixel, the plurality of candidate pixels.
In this step, after the center pixel and the plurality of candidate pixels are obtained, a measurement pixel corresponding to the measurement size in the image may be determined from the center pixel and the plurality of candidate pixels.
Further, the controller, when configured to determine a measurement pixel corresponding to the measurement size in the image according to the center pixel and the plurality of candidate pixels, may be specifically configured to: determining a depth difference value according to the depth of the candidate pixel and the depth of the center pixel; determining a coordinate difference value according to pixel coordinates respectively corresponding to the candidate pixel and the fourth preset coordinate axis direction of the center pixel; determining an actual coordinate difference value as a product of the coordinate difference value and an actual space distance corresponding to the unit pixel; if the depth difference is smaller than or equal to the first threshold and the actual coordinate difference is smaller than or equal to the second threshold, determining the candidate pixel as a measurement pixel corresponding to the measurement size in the image.
Illustratively, the first threshold is, for example, 20 cm, and the second threshold is, for example, 40 cm; both may be adjusted as needed according to the target portion, which is not a limitation of the present application. For each candidate pixel, a depth difference may be determined according to the depth of the candidate pixel and the depth of the center pixel. The fourth preset coordinate axis direction is, for example, the x-axis direction of the pixel coordinate system, so the coordinate difference can be determined from the x coordinates of the candidate pixel and the center pixel in the x-axis direction of the pixel coordinate system. The product of the coordinate difference and the actual spatial distance corresponding to the unit pixel (i.e., m) obtained in the above embodiment is taken as the actual coordinate difference between the candidate pixel and the center pixel. If the depth difference is less than or equal to 20 cm and the actual coordinate difference is less than or equal to 40 cm, the candidate pixel is determined to be a measurement pixel corresponding to the measurement size in the image. It will be appreciated that if the depth difference is greater than 20 cm, or the actual coordinate difference exceeds 40 cm, it may be determined that the candidate pixel is not a measurement pixel corresponding to the measurement size in the image. Fig. 9c is a schematic diagram of measuring the maximum shoulder width of a human body according to another embodiment of the present application, showing the measurement pixels 905 corresponding to the maximum shoulder width in the image. Fig. 9d is a schematic diagram of the measurement pixels corresponding to the human waistline in an image according to an embodiment of the present application, showing the measurement pixels 906 corresponding to the waistline in the image.
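A sketch of the filtering in S807, assuming the depth map is an array indexed as depth_map[y, x] and that the depths, m, and the two thresholds share one unit (metres here):

```python
import numpy as np

def find_measurement_pixels(depth_map: np.ndarray, center: tuple[int, int], m: float,
                            depth_threshold: float = 0.20,
                            coord_threshold: float = 0.40) -> list[tuple[int, int]]:
    """S807: keep the candidates in the center pixel's row whose depth stays close
    to the center depth and whose real-space x offset stays within the second
    threshold (0.20 m / 0.40 m, i.e. the 20 cm / 40 cm example above)."""
    cx, cy = center
    center_depth = float(depth_map[cy, cx])
    selected = []
    for x in range(depth_map.shape[1]):            # candidates share y = cy
        depth_diff = abs(float(depth_map[cy, x]) - center_depth)
        actual_coord_diff = abs(x - cx) * m        # coordinate difference scaled by m
        if depth_diff <= depth_threshold and actual_coord_diff <= coord_threshold:
            selected.append((x, cy))
    return selected
```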
In the embodiment of the present application, the step S604 in fig. 6 may be further refined into two steps S808 and S809 as follows:
in S808, if the target portion is not in the preset shape, the size of the target portion is determined as the product of the number of measurement pixels and the actual spatial distance corresponding to the unit pixel.
The preset shape is, for example, an arc. If the target portion is the shoulder of a human body, the shoulder shot by the camera is not arc-shaped, and the maximum shoulder width of the human body may be determined as the product of the number of measurement pixels and the actual spatial distance corresponding to the unit pixel obtained in the above embodiment. The number of measurement pixels may be obtained as follows: assuming there are n measurement pixels, with the x coordinate of the first measurement pixel in the pixel coordinate system denoted x_1 and the x coordinate of the n-th measurement pixel denoted x_n, the number of measurement pixels is |x_n - x_1|. Therefore, the size of the target portion is B = |x_n - x_1| × m.
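A short sketch of S808 under the same assumptions, taking the measurement pixels as (x, y) tuples on one row:

```python
def flat_size(measurement_pixels: list[tuple[int, int]], m: float) -> float:
    """S808 for a non-arc-shaped part: B = |x_n - x_1| * m, with x_1 and x_n the
    extreme x coordinates among the measurement pixels on the row."""
    xs = [x for (x, _) in measurement_pixels]
    return (max(xs) - min(xs)) * m
```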
In S809, if the target portion is a preset shape, the size of the target portion is determined according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel.
The predetermined shape is, for example, an arc shape. If the target part is a human waist, the human waist shot by the camera is arc-shaped, and the size of the target part is determined according to the depth of the measurement pixel and the actual space distance corresponding to the unit pixel.
Further, when the controller is configured to determine the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel, the controller may be specifically configured to: determining the length of a line segment formed by each two adjacent pixels according to the difference value of the depths of each two adjacent pixels in the measured pixels and the actual space distance corresponding to the unit pixel; and determining the size of the target part according to the sum of the lengths of the line segments.
Fig. 10 is a schematic diagram of measuring the waistline of a human body according to an embodiment of the present application, where the target portion is the waist of the human body. As shown in fig. 10, the waist shot by the camera approximates an irregular arc composed of pixels a_1 to a_n; the points corresponding to pixels a_1 to a_n on the calibration plane of the camera are denoted p_1 to p_n, and the depths of pixels a_1 to a_n are denoted d_1 to d_n. The length of the line segment formed by every two adjacent pixels is l_i = sqrt(m^2 + (d_{i+1} - d_i)^2), where 1 ≤ i < n. The size of the target portion is determined according to the sum of the lengths of the line segments formed by every two adjacent measurement pixels.
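The chord sum above can be sketched as follows; this reuses the segment-length formula l_i = sqrt(m^2 + (d_{i+1} - d_i)^2) reconstructed above and the array-indexed depth map from the earlier sketches:

```python
import math
import numpy as np

def arc_size(depth_map: np.ndarray, pixels: list[tuple[int, int]], m: float) -> float:
    """S809: approximate the arc length as the sum of chord lengths
    l_i = sqrt(m^2 + (d_{i+1} - d_i)^2) over adjacent measurement pixels."""
    pixels = sorted(pixels)                              # left to right along the row
    depths = [float(depth_map[y, x]) for (x, y) in pixels]
    return sum(math.sqrt(m * m + (depths[i + 1] - depths[i]) ** 2)
               for i in range(len(depths) - 1))
```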
Further, the preset shapes include a first preset shape and a second preset shape, and the controller may be specifically configured to, when determining the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel: determining a first size corresponding to the target part according to the depth of the measurement pixel corresponding to the first preset shape and the actual space distance corresponding to the unit pixel; determining a second size corresponding to the target part according to the depth of the measurement pixel corresponding to the second preset shape and the actual space distance corresponding to the unit pixel; the size of the target site is determined to be the sum of the first size and the second size.
The first preset shape is an arc corresponding to the waist of the front face of the human body, and the second preset shape is an arc corresponding to the waist of the back face of the human body. The arc length of the front waist of the human body can be determined according to the depth of the measurement pixel corresponding to the first preset shape and the actual space distance corresponding to the unit pixel; determining the arc length of the waist of the back of the human body according to the depth of the measurement pixel corresponding to the second preset shape and the actual space distance corresponding to the unit pixel; the sum of the arc length of the front waist of the human body and the arc length of the back waist of the human body is the waistline of the human body.
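For the two preset shapes, a minimal sketch reusing arc_size() from above; the text does not spell out how the back arc is captured, so separate depth maps, measurement pixels, and per-pixel scales for the front and back are assumed here:

```python
def waistline(front_depth_map, front_pixels, m_front,
              back_depth_map, back_pixels, m_back) -> float:
    """Waistline = front waist arc (first preset shape) +
    back waist arc (second preset shape)."""
    return (arc_size(front_depth_map, front_pixels, m_front)
            + arc_size(back_depth_map, back_pixels, m_back))
```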
According to the measurement method provided by this embodiment of the application, the depth information, spatial position information, and pixel position information of the image are acquired. A first difference is determined according to the pixel position information corresponding to the first target pixel and the second target pixel in the image, and a second difference is determined according to their spatial position information; the actual spatial distance corresponding to a unit pixel on the target portion of the object contained in the image is then determined as the ratio of the second difference to the first difference. The center pixel on the corresponding target portion is determined according to the pixel position information of the first target pixel and/or the second target pixel, and a plurality of candidate pixels having the same pixel coordinate as the center pixel in the third preset coordinate axis direction are determined according to the pixel position information of the center pixel; the measurement pixels corresponding to the measurement size in the image are then determined from the center pixel and the candidate pixels. If the target portion is not in the preset shape, the size of the target portion is determined as the product of the number of measurement pixels and the actual spatial distance corresponding to the unit pixel; if the target portion is in the preset shape, the size of the target portion is determined according to the depths of the measurement pixels and the actual spatial distance corresponding to the unit pixel. Because the measurement pixels corresponding to the measurement size are determined based on the depth information of the image, and the size of the target portion is determined from the measurement pixels and the actual spatial distance corresponding to the unit pixel, the size of the target portion of the object contained in the image can be obtained accurately, realizing measurement based on depth information, such as human body measurement.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 11 is a schematic structural diagram of a measuring device according to an embodiment of the present application, where the measuring device is applied to a display apparatus. As shown in fig. 11, a measurement device 1100 provided by an embodiment of the present application includes: an acquisition module 1101, a first determination module 1102, a second determination module 1103 and a processing module 1104.
The obtaining module 1101 is configured to obtain depth information, spatial position information, and pixel position information of an image, where the spatial position information is used to represent position information of pixels in the image in an actual space, and the pixel position information is used to represent position information of pixels in the image.
The first determining module 1102 is configured to determine an actual spatial distance corresponding to a unit pixel on a target portion of an object included in an image according to pixel position information and spatial position information of the target pixel in the image, where the target pixel is a pixel on the target portion.
The second determining module 1103 is configured to determine, according to the depth information of the image and the pixel position information of the image, a measurement pixel corresponding to the measurement size in the image, where the measurement size is a size corresponding to the target portion.
The processing module 1104 is configured to determine a size of the target portion according to the actual spatial distance corresponding to the measurement pixel and the unit pixel.
In some possible implementations, the target pixels include a first target pixel and a second target pixel, and the first determining module 1102 may be specifically configured to: determining a first difference value according to pixel position information corresponding to a first target pixel and a second target pixel in the image, wherein the first difference value is used for representing the distance between the first target pixel and the second target pixel in the image; determining a second difference value according to the spatial position information corresponding to the first target pixel and the second target pixel in the image, wherein the second difference value is used for representing the distance between the first target pixel and the second target pixel in the actual space; and determining the actual space distance corresponding to the unit pixel on the target part of the object contained in the image as the ratio of the second difference value to the first difference value.
In some possible implementations, the first determining module 1102 may be specifically configured to, when configured to determine the first difference value according to pixel position information corresponding to the first target pixel and the second target pixel in the image: and determining a first difference value according to the pixel coordinates of the first target pixel in the direction of the first preset coordinate axis and the pixel coordinates of the second target pixel in the direction of the first preset coordinate axis, wherein the pixel coordinates are used for representing the pixel position information.
In some possible implementations, the first determining module 1102 may be specifically configured to, when configured to determine the second difference value according to spatial position information corresponding to the first target pixel and the second target pixel in the image: and determining a second difference value according to the world coordinate of the first target pixel in the second preset coordinate axis direction and the world coordinate of the second target pixel in the second preset coordinate axis direction, wherein the world coordinate is used for representing the space position information.
In some possible implementations, the second determining module 1103 may be specifically configured to: determining a center pixel on the corresponding target part according to the pixel position information of the first target pixel and/or the pixel position information of the second target pixel; determining a plurality of candidate pixels according to the pixel position information of the central pixel, wherein the plurality of candidate pixels are identical to the pixel coordinates of the central pixel in the direction of a third preset coordinate axis; and determining a corresponding measurement pixel of the measurement size in the image according to the central pixel and the plurality of candidate pixels.
In some possible implementations, the second determining module 1103, when configured to determine, from the center pixel and the plurality of candidate pixels, a measurement pixel corresponding to the measurement size in the image, may be specifically configured to: determining a depth difference value according to the depth of the candidate pixel and the depth of the center pixel; determining a coordinate difference value according to pixel coordinates respectively corresponding to the candidate pixel and the fourth preset coordinate axis direction of the center pixel; determining an actual coordinate difference value as a product of the coordinate difference value and an actual space distance corresponding to the unit pixel; if the depth difference is smaller than or equal to the first threshold and the actual coordinate difference is smaller than or equal to the second threshold, determining the candidate pixel as a measurement pixel corresponding to the measurement size in the image.
In some possible implementations, the processing module 1104 may be specifically configured to: if the target part is not in the preset shape, determining that the size of the target part is the product of the number of the measurement pixels and the actual space distance corresponding to the unit pixels; if the target part is in the preset shape, determining the size of the target part according to the depth of the measurement pixel and the actual space distance corresponding to the unit pixel.
In some possible implementations, the processing module 1104, when configured to determine the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel, may be specifically configured to: determining the length of a line segment formed by each two adjacent pixels according to the difference value of the depths of each two adjacent pixels in the measured pixels and the actual space distance corresponding to the unit pixel; and determining the size of the target part according to the sum of the lengths of the line segments.
In some possible implementations, the preset shapes include a first preset shape and a second preset shape, and the processing module 1104 may be specifically configured to, when determining the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel: determining a first size corresponding to the target part according to the depth of the measurement pixel corresponding to the first preset shape and the actual space distance corresponding to the unit pixel; determining a second size corresponding to the target part according to the depth of the measurement pixel corresponding to the second preset shape and the actual space distance corresponding to the unit pixel; the size of the target site is determined to be the sum of the first size and the second size.
It should be noted that, the device provided in this embodiment may be used to execute the above measurement method, and its implementation manner and technical effects are similar, and this embodiment is not repeated here.
It should be understood that the division of the modules of the above apparatus is merely a division of logical functions; in actual implementation, the modules may be fully or partially integrated into one physical entity or may be physically separate. These modules may all be implemented in the form of software called by a processing element, or all in hardware; alternatively, some modules may be implemented in the form of software called by a processing element and the others in hardware. For example, the processing module may be a separately arranged processing element, may be integrated in a chip of the above apparatus, or may be stored in the memory of the above apparatus in the form of program code, with the functions of the processing module called and executed by a processing element of the above apparatus. The implementation of the other modules is similar. In addition, all or part of the modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal processing capability. In implementation, each step of the above method or each of the above modules may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the modules above may be one or more integrated circuits configured to implement the above methods, such as one or more ASICs (Application Specific Integrated Circuits), one or more DSPs (Digital Signal Processors), or one or more FPGAs (Field Programmable Gate Arrays), etc. For another example, when one of the modules above is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a CPU or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a System-on-a-Chip (SoC).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer programs. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer program may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, it may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave, etc.) manner. A computer readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, containing an integration of one or more available media. Available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state disk (SSD)), among others.
The embodiment of the application also provides a computer readable storage medium, in which a computer program is stored which, when being executed by a processor, implements the measuring method according to any of the method embodiments above.
Embodiments of the present application also provide a computer program product comprising a computer program stored in a computer readable storage medium, from which at least one processor can read the computer program, the at least one processor executing the computer program implementing a measurement method according to any of the method embodiments described above.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (17)

1. A display device, characterized by comprising:
a display;
a camera configured to collect an image containing depth information;
a controller respectively connected with the display and the camera, the controller configured to:
acquiring depth information, spatial position information and pixel position information of the image, wherein the spatial position information is used for representing the position information of pixels in the image in an actual space, and the pixel position information is used for representing the position information of the pixels in the image;
determining an actual spatial distance corresponding to a unit pixel on a target part of an object contained in the image according to pixel position information and spatial position information of the target pixel in the image, wherein the target pixel is a pixel on the target part;
determining a measurement pixel corresponding to a measurement size in the image according to the depth information of the image and the pixel position information of the image, wherein the measurement size is the size corresponding to the target part;
determining the size of the target part according to the actual space distance corresponding to the measurement pixel and the unit pixel;
the target pixels include a first target pixel and a second target pixel; the controller is specifically configured to, when determining a measurement pixel corresponding to a measurement size in the image according to depth information of the image and pixel position information of the image:
determining a center pixel on the corresponding target part according to the pixel position information of the first target pixel and/or the pixel position information of the second target pixel;
determining a plurality of candidate pixels according to the pixel position information of the central pixel, wherein the candidate pixels are the same as the pixel coordinates of the central pixel in the direction of a third preset coordinate axis;
determining a depth difference value according to the depth of the candidate pixel and the depth of the center pixel;
determining a coordinate difference value according to pixel coordinates respectively corresponding to the candidate pixel and the fourth preset coordinate axis direction of the central pixel;
determining an actual coordinate difference value as a product of the coordinate difference value and an actual spatial distance corresponding to the unit pixel;
and if the depth difference value is smaller than or equal to a first threshold value and the actual coordinate difference value is smaller than or equal to a second threshold value, determining the candidate pixel as a measurement pixel corresponding to the measurement size in the image.
2. The display device according to claim 1, wherein the controller is configured to, when determining the actual spatial distance corresponding to the unit pixel on the target portion of the object included in the image according to the pixel position information and the spatial position information of the target pixel in the image, specifically:
determining a first difference value according to pixel position information corresponding to the first target pixel and the second target pixel in the image, wherein the first difference value is used for representing the distance between the first target pixel and the second target pixel in the image;
determining a second difference value according to the spatial position information corresponding to the first target pixel and the second target pixel in the image, wherein the second difference value is used for representing the distance between the first target pixel and the second target pixel in an actual space;
and determining the actual space distance corresponding to the unit pixel on the target part of the object contained in the image as the ratio of the second difference value to the first difference value.
3. The display device according to claim 2, wherein the controller is configured to, when configured to determine the first difference value according to pixel position information corresponding to the first target pixel and the second target pixel in the image, specifically:
and determining a first difference value according to the pixel coordinate of the first target pixel in the first preset coordinate axis direction and the pixel coordinate of the second target pixel in the first preset coordinate axis direction, wherein the pixel coordinate is used for representing the pixel position information.
4. The display device according to claim 2, wherein the controller is configured to, when determining the second difference value according to the spatial position information corresponding to the first target pixel and the second target pixel in the image, specifically:
and determining a second difference value according to world coordinates of the first target pixel in the second preset coordinate axis direction and world coordinates of the second target pixel in the second preset coordinate axis direction, wherein the world coordinates are used for representing the space position information.
5. The display device according to any one of claims 1 to 4, wherein the controller is configured to, when configured to determine the size of the target portion based on the actual spatial distance corresponding to the measurement pixel and the unit pixel, specifically:
if the target part is not in the preset shape, determining that the size of the target part is the product of the number of the measurement pixels and the actual space distance corresponding to the unit pixels;
and if the target part is in the preset shape, determining the size of the target part according to the depth of the measurement pixel and the actual space distance corresponding to the unit pixel.
6. The display device according to claim 5, wherein the controller is configured to, when determining the size of the target portion according to the depth of the measurement pixel and the actual spatial distance corresponding to the unit pixel, specifically:
determining the length of a line segment formed by each two adjacent pixels according to the difference value of the depths of each two adjacent pixels in the measurement pixel and the actual space distance corresponding to the unit pixel;
and determining the size of the target part according to the sum of the lengths of the line segments.
7. The display device according to claim 6, wherein the preset shape includes a first preset shape and a second preset shape, and, when determining the size of the target part according to the depths of the measurement pixels and the actual spatial distance corresponding to the unit pixel, the controller is specifically configured to:
determining a first size corresponding to the target part according to the depths of the measurement pixels corresponding to the first preset shape and the actual spatial distance corresponding to the unit pixel;
determining a second size corresponding to the target part according to the depths of the measurement pixels corresponding to the second preset shape and the actual spatial distance corresponding to the unit pixel;
and determining the size of the target part as the sum of the first size and the second size.
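Claim 7 measures the two preset sub-shapes separately and sums the results. A usage sketch reusing the target_part_size function above, with dummy depth profiles (all values illustrative):

# Dummy depth profiles for the two preset sub-shapes of one target part,
# e.g. the front and back contours of a limb (illustrative values only).
front_depths = [2.00, 1.99, 1.98, 1.99, 2.00]  # first preset shape
back_depths = [2.00, 2.01, 2.02, 2.01, 2.00]   # second preset shape
unit_dist = 0.004                               # metres per pixel

first_size = target_part_size(front_depths, unit_dist, has_preset_shape=True)
second_size = target_part_size(back_depths, unit_dist, has_preset_shape=True)
# The size of the target part is the sum of the first and second sizes.
print(first_size + second_size)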
8. A measurement method, applied to a display device, comprising:
acquiring depth information, spatial position information and pixel position information of an image, wherein the spatial position information is used for representing the position information of pixels in the image in an actual space, and the pixel position information is used for representing the position information of the pixels in the image;
determining an actual spatial distance corresponding to a unit pixel on a target part of an object contained in the image according to pixel position information and spatial position information of a target pixel in the image, wherein the target pixel is a pixel on the target part;
determining a measurement pixel corresponding to a measurement size in the image according to the depth information of the image and the pixel position information of the image, wherein the measurement size is the size corresponding to the target part;
determining the size of the target part according to the actual spatial distance corresponding to the measurement pixel and the unit pixel;
wherein the target pixels comprise a first target pixel and a second target pixel, and the determining of the measurement pixels corresponding to the measurement size in the image according to the depth information of the image and the pixel position information of the image comprises:
determining a center pixel on the target part according to the pixel position information of the first target pixel and/or the pixel position information of the second target pixel;
determining a plurality of candidate pixels according to the pixel position information of the center pixel, wherein each candidate pixel has the same pixel coordinate as the center pixel in a third preset coordinate axis direction;
determining a depth difference value according to the depth of the candidate pixel and the depth of the center pixel;
determining a coordinate difference value according to the pixel coordinates of the candidate pixel and of the center pixel in a fourth preset coordinate axis direction;
determining an actual coordinate difference value as the product of the coordinate difference value and the actual spatial distance corresponding to the unit pixel;
and if the depth difference value is less than or equal to a first threshold and the actual coordinate difference value is less than or equal to a second threshold, determining the candidate pixel as a measurement pixel corresponding to the measurement size in the image.
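The candidate filter at the end of claim 8 keeps a pixel only while its depth stays within a first threshold of the center pixel's depth and its image offset, scaled to actual space by the unit-pixel distance, stays within a second threshold. A hedged sketch follows, assuming the third preset coordinate axis is the image row (candidates share the center pixel's row) and the fourth is the column; the threshold values and all names are illustrative assumptions, not values fixed by the claims.

import numpy as np

def measurement_pixels(depth_map, center, unit_dist,
                       first_threshold=0.05, second_threshold=0.30):
    """Select the measurement pixels corresponding to a measurement size.

    depth_map        -- 2D array of per-pixel depths (metres)
    center           -- (row, col) of the center pixel on the target part
    unit_dist        -- actual spatial distance corresponding to a unit pixel
    first_threshold  -- bound on the depth difference value (assumed value)
    second_threshold -- bound on the actual coordinate difference value (assumed value)
    """
    row, c_col = center
    c_depth = depth_map[row, c_col]
    selected = []
    # Candidate pixels share the center pixel's coordinate along the third
    # preset coordinate axis (assumed here: the image row).
    for col in range(depth_map.shape[1]):
        depth_diff = abs(depth_map[row, col] - c_depth)
        # Coordinate difference along the fourth preset coordinate axis (the
        # column), converted to actual space via the unit-pixel distance.
        actual_coord_diff = abs(col - c_col) * unit_dist
        if depth_diff <= first_threshold and actual_coord_diff <= second_threshold:
            selected.append((row, col))
    return selected


# Illustrative use: on a flat 4x6 depth map every pixel in the center row
# passes both thresholds.
depths = np.full((4, 6), 2.0)
print(measurement_pixels(depths, center=(2, 3), unit_dist=0.004))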
9. The method according to claim 8, wherein the determining the actual spatial distance corresponding to the unit pixel on the target part of the object contained in the image according to the pixel position information and the spatial position information of the target pixel in the image comprises:
determining a first difference value according to pixel position information corresponding to the first target pixel and the second target pixel in the image, wherein the first difference value is used for representing the distance between the first target pixel and the second target pixel in the image;
determining a second difference value according to the spatial position information corresponding to the first target pixel and the second target pixel in the image, wherein the second difference value is used for representing the distance between the first target pixel and the second target pixel in an actual space;
and determining the actual spatial distance corresponding to the unit pixel on the target part of the object contained in the image as the ratio of the second difference value to the first difference value.
10. The method according to claim 9, wherein the determining the first difference value according to the pixel position information respectively corresponding to the first target pixel and the second target pixel in the image comprises:
determining the first difference value according to the pixel coordinate of the first target pixel in a first preset coordinate axis direction and the pixel coordinate of the second target pixel in the first preset coordinate axis direction, wherein the pixel coordinates are used for representing the pixel position information.
11. The method according to claim 9, wherein the determining the second difference value according to the spatial position information respectively corresponding to the first target pixel and the second target pixel in the image comprises:
determining the second difference value according to the world coordinate of the first target pixel in a second preset coordinate axis direction and the world coordinate of the second target pixel in the second preset coordinate axis direction, wherein the world coordinates are used for representing the spatial position information.
12. The measurement method according to any one of claims 8 to 11, wherein the determining the size of the target part according to the actual spatial distance corresponding to the measurement pixel and the unit pixel comprises:
if the target part does not have the preset shape, determining the size of the target part as the product of the number of the measurement pixels and the actual spatial distance corresponding to the unit pixel;
and if the target part has the preset shape, determining the size of the target part according to the depths of the measurement pixels and the actual spatial distance corresponding to the unit pixel.
13. The measurement method according to claim 12, wherein the determining the size of the target part according to the depths of the measurement pixels and the actual spatial distance corresponding to the unit pixel comprises:
determining the length of the line segment formed by every two adjacent pixels among the measurement pixels according to the difference between the depths of the two adjacent pixels and the actual spatial distance corresponding to the unit pixel;
and determining the size of the target part according to the sum of the lengths of the line segments.
14. The measurement method according to claim 13, wherein the preset shape includes a first preset shape and a second preset shape, and the determining the size of the target part according to the depths of the measurement pixels and the actual spatial distance corresponding to the unit pixel comprises:
determining a first size corresponding to the target part according to the depths of the measurement pixels corresponding to the first preset shape and the actual spatial distance corresponding to the unit pixel;
determining a second size corresponding to the target part according to the depths of the measurement pixels corresponding to the second preset shape and the actual spatial distance corresponding to the unit pixel;
and determining the size of the target part as the sum of the first size and the second size.
15. A measurement apparatus, applied to a display device, the measurement apparatus comprising:
an acquisition module, configured to acquire depth information, spatial position information and pixel position information of an image, wherein the spatial position information is used for representing the position information of pixels in the image in an actual space, and the pixel position information is used for representing the position information of the pixels in the image;
a first determining module, configured to determine an actual spatial distance corresponding to a unit pixel on a target part of an object contained in the image according to pixel position information and spatial position information of a target pixel in the image, wherein the target pixel is a pixel on the target part;
a second determining module, configured to determine a measurement pixel corresponding to a measurement size in the image according to the depth information of the image and the pixel position information of the image, wherein the measurement size is the size corresponding to the target part;
and a processing module, configured to determine the size of the target part according to the actual spatial distance corresponding to the measurement pixel and the unit pixel;
wherein the target pixels comprise a first target pixel and a second target pixel, and the second determining module is specifically configured to:
determining a center pixel on the target part according to the pixel position information of the first target pixel and/or the pixel position information of the second target pixel;
determining a plurality of candidate pixels according to the pixel position information of the center pixel, wherein each candidate pixel has the same pixel coordinate as the center pixel in a third preset coordinate axis direction;
determining a depth difference value according to the depth of the candidate pixel and the depth of the center pixel;
determining a coordinate difference value according to the pixel coordinates of the candidate pixel and of the center pixel in a fourth preset coordinate axis direction;
determining an actual coordinate difference value as the product of the coordinate difference value and the actual spatial distance corresponding to the unit pixel;
and if the depth difference value is less than or equal to a first threshold and the actual coordinate difference value is less than or equal to a second threshold, determining the candidate pixel as a measurement pixel corresponding to the measurement size in the image.
16. A computer readable storage medium, characterized in that the computer readable storage medium stores computer program instructions which, when executed, implement the measurement method according to any one of claims 8 to 14.
17. A computer program product, comprising a computer program which, when executed by a processor, implements the measurement method according to any one of claims 8 to 14.
CN202110860108.8A | Priority 2021-07-28 | Filed 2021-07-28 | Display equipment, measuring method and device | Active | CN113587812B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110860108.8A | 2021-07-28 | 2021-07-28 | Display equipment, measuring method and device

Publications (2)

Publication Number | Publication Date
CN113587812A (en) | 2021-11-02
CN113587812B (en) | 2023-10-27

Family

ID=78251370

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110860108.8A (Active, granted as CN113587812B) | Display equipment, measuring method and device | 2021-07-28 | 2021-07-28

Country Status (1)

Country | Link
CN | CN113587812B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101349540A * | 2007-07-18 | 2009-01-21 | Industrial Technology Research Institute | Image regulating method and image viewfinding apparatus
CN102042817A * | 2010-11-23 | 2011-05-04 | Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences | Terrain roughness tester
CN102135417A * | 2010-12-26 | 2011-07-27 | Beihang University | Full-automatic three-dimension characteristic extracting method
CN103871051A * | 2014-02-19 | 2014-06-18 | Xiaomi Inc. | Image processing method, device and electronic equipment
CN108240793A * | 2018-01-26 | 2018-07-03 | Guangdong Midea Intelligent Robot Co., Ltd. | Dimension of object measuring method, device and system
CN109255844A * | 2017-07-12 | 2019-01-22 | General Electric Company | Graphics overlay layer for measuring the size of features using a video inspection device

Also Published As

Publication number | Publication date
CN113587812A (en) | 2021-11-02

Similar Documents

Publication | Title
CN112866773B (en) Display equipment and camera tracking method in multi-person scene
CN112672062B (en) Display device and portrait positioning method
CN114237419B (en) Display device and touch event identification method
CN114630053B (en) HDR image display method and display device
CN111949782A (en) Information recommendation method and service equipment
CN112073798B (en) Data transmission method and equipment
CN113747078B (en) Display device and focal length control method
CN114245090A (en) Image projection method, laser projection apparatus, and computer-readable storage medium
CN111939561B (en) Display device and interaction method
CN111984167B (en) Quick naming method and display device
CN114430492B (en) Display device, mobile terminal and picture synchronous scaling method
CN113473024A (en) Display device, holder camera and camera control method
CN113587812B (en) Display equipment, measuring method and device
WO2021218473A1 (en) Display method and display device
CN112399235B (en) Camera shooting effect enhancement method and display device of intelligent television
CN115082959A (en) Display device and image processing method
CN115842964A (en) Image acquisition device, display equipment, image processing method and device
CN115185392A (en) Display device, image processing method and device
CN111931692A (en) Display device and image recognition method
CN112367550A (en) Method for realizing multi-title dynamic display of media asset list and display equipment
CN111669662A (en) Display device, video call method and server
CN114449179B (en) Display device and image mixing method
CN113076031B (en) Display equipment, touch positioning method and device
CN113645502B (en) Method for dynamically adjusting control and display device
CN113807375B (en) Display equipment

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant