WO2019090904A1 - Method, apparatus, device and storage medium for determining distance - Google Patents

Method, apparatus, device and storage medium for determining distance

Info

Publication number
WO2019090904A1
WO2019090904A1 · PCT/CN2017/116769 · CN2017116769W
Authority
WO
WIPO (PCT)
Prior art keywords
distance
user
image data
data
unknown
Prior art date
Application number
PCT/CN2017/116769
Other languages
English (en)
French (fr)
Inventor
陈伟东
Original Assignee
广州视源电子科技股份有限公司
广州视睿电子科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司, 广州视睿电子科技有限公司 filed Critical 广州视源电子科技股份有限公司
Publication of WO2019090904A1 publication Critical patent/WO2019090904A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Definitions

  • the present invention relates to the field of machine learning technology, and in particular, to a method, an apparatus, a device, and a storage medium for determining a distance.
  • Smart devices have become an indispensable item in people's lives. Users can realize various needs through various functions of smart devices, such as shopping needs, teaching needs, and painting needs.
  • the smart device can monitor the user's usage status in real time to better serve the user according to the usage state. For example, the smart device determines the distance between the user and the smart device in real time, and confirms whether the user is too close or too far from the smart device through the distance, so as to remind the user to ensure that the user uses the smart device within an appropriate distance range.
  • the distance between the user and the smart device is usually determined by installing a distance sensor in the smart device. In general, when measuring the distance by the distance sensor, it is usually only possible to measure the distance within a certain range and there is a measurement error, which may result in an inaccurate distance of the smart device, thereby making the smart device unable to serve the user better.
  • the embodiments of the present invention provide a method, a device, a device, and a storage medium for determining a distance, so as to solve the technical problem that the existing distance determining solution cannot accurately obtain the distance between the user and the smart device.
  • In a first aspect, an embodiment of the present invention provides a method for determining a distance, including: acquiring image data of an unknown-distance user collected by a camera; and identifying the image data of the unknown-distance user by using a distance model, and determining, according to the recognition result, an actual distance between the unknown-distance user and the device to which the camera belongs, wherein the distance model is determined by training on image data of known-distance users.
  • In a second aspect, an embodiment of the present invention further provides an apparatus for determining a distance, including:
  • a data acquisition module configured to acquire image data of an unknown distance user collected by the camera
  • a distance recognition module configured to identify the image data of the unknown-distance user by using a distance model, and determine, according to the recognition result, an actual distance between the unknown-distance user and the device to which the camera belongs, wherein the distance model is determined by training on image data of known-distance users.
  • In a third aspect, an embodiment of the present invention further provides a device, including:
  • one or more processors;
  • a memory for storing one or more programs
  • a camera for acquiring image data
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method of determining a distance as described in the embodiments of the present invention.
  • In a fourth aspect, an embodiment of the present invention further provides a storage medium including computer-executable instructions, which, when executed by a computer processor, are used to perform the method for determining a distance according to the embodiments of the present invention.
  • The method, apparatus, device, and storage medium for determining a distance provided above identify the image data of an unknown-distance user collected by a camera by using a distance model, where the distance model is determined by training on image data of known-distance users, and determine, according to the recognition result, the actual distance between the unknown-distance user and the device to which the camera belongs. By building the distance model through machine learning, the actual distance between the user and the device is obtained quickly and accurately.
  • FIG. 1 is a flowchart of a method for determining a distance according to Embodiment 1 of the present invention
  • FIG. 2 is a flowchart of a method for determining a distance according to Embodiment 2 of the present invention
  • FIG. 3 is a flowchart of a method for determining a distance according to Embodiment 3 of the present invention.
  • FIG. 4 is a schematic structural diagram of an apparatus for determining a distance according to Embodiment 4 of the present invention.
  • FIG. 5 is a schematic structural diagram of a device according to Embodiment 5 of the present invention.
  • FIG. 1 is a flowchart of a method for determining a distance according to Embodiment 1 of the present invention.
  • The method for determining a distance provided by this embodiment may be performed by an apparatus for determining a distance, and that apparatus may be implemented by software and/or hardware and integrated in a device.
  • the device comprises at least one camera, and the camera may be a front camera or a rear camera.
  • the device may be an intelligent device such as an interactive smart tablet or a smart phone.
  • the method provided in this embodiment specifically includes:
  • S110: Acquire image data of an unknown-distance user collected by the camera.
  • The unknown-distance user is the user whose actual distance to the device currently needs to be determined.
  • The image data is data determined from an image containing the user, and it is a typical parameter for determining the actual distance between the user and the device. In general, the image data may be pixel data related to the user in the image: for example, it may be the ratio of the pixels occupied by the user to all pixels of the image; it may be the total number of first pixels included in the display area of the user's head; or it may be the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image.
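  • As an illustration only (not part of the embodiment), the sketch below computes the three kinds of image data named above from a head bounding box and an eye landmark; it assumes an upstream face/landmark detector supplies those inputs, and the function name and sample values are hypothetical.

```python
import numpy as np

def image_data_features(image_shape, head_box, eye_y):
    """Compute the three candidate kinds of image data described above.

    image_shape: (height, width) of the captured frame, in pixels.
    head_box:    (x, y, w, h) bounding box of the user's head, in pixels,
                 assumed to come from an upstream face detector.
    eye_y:       row index (y coordinate) of the chosen eye pixel.
    """
    height, width = image_shape
    x, y, w, h = head_box

    # Ratio of the pixels where the user (here: the head area) is located
    # to all pixels of the image.
    pixel_ratio = (w * h) / float(height * width)

    # Total number of first pixels contained in the head display area.
    first_pixel_total = w * h

    # Total number of second pixels in the vertical direction between the
    # user's eyes and the lower boundary of the image (rows grow downward,
    # so the lower boundary is row height - 1).
    second_pixel_total = (height - 1) - int(eye_y)

    return pixel_ratio, first_pixel_total, second_pixel_total

# Example for a 3264x2448 frame, i.e. the resolution mentioned in this embodiment.
print(image_data_features((2448, 3264), head_box=(1500, 600, 420, 520), eye_y=760))
```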
  • Further, only one frame currently captured by the camera may be acquired, without taking a photograph, and the image data is determined from that frame.
  • the advantage of this is that the current latest image data can be obtained in real time when the user moves, and the real-time performance of the image data is ensured.
  • This scheme is suitable for a scene with high real-time requirements for image data.
  • Alternatively, the camera may first be controlled to take a photograph containing the unknown-distance user, and the image data is determined by analyzing that photograph. The advantage of this is that the image data only needs to be determined from a single photograph, which reduces the amount of data processing.
  • This scheme is suitable for scenes with low real-time requirements on image data.
  • the format of the image captured by the camera is not limited, such as the JPEG format.
  • The greater the resolution of the camera, the more pixels are obtained when acquiring the image, and the more accurate the corresponding image data is.
  • cameras of different resolutions can be selected according to user requirements. For example, in this embodiment, a camera with a resolution of 3264 ⁇ 2448 is used, and an image of 3264 ⁇ 2448 pixels can be obtained.
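  • For the frame-grabbing variant described above, a minimal sketch with OpenCV might look as follows; the camera index and the use of OpenCV itself are assumptions, not requirements of the embodiment.

```python
import cv2  # OpenCV is assumed to be available

def grab_current_frame(camera_index=0):
    """Grab the single frame currently captured by the camera, without taking a photo."""
    cap = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cap.read()   # one BGR frame as a NumPy array
        if not ok:
            raise RuntimeError("camera frame could not be read")
        return frame             # shape: (height, width, 3)
    finally:
        cap.release()

frame = grab_current_frame()
print(frame.shape)               # e.g. (2448, 3264, 3) for a 3264x2448 camera
```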
  • S120 Identify, by using a distance model, image data of an unknown distance user, and determine an actual distance between the unknown distance user and the device to which the camera belongs according to the recognition result.
  • the distance model is determined based on the training of the image data of the known distance user, and the process of training the distance model is a process of machine learning.
  • The image data used to train the distance model is of the same type as the image data acquired in S110. Further, the actual distance between different unknown-distance users and the device to which the camera belongs can be determined according to the distance model.
  • The specific content of the actual distance can be set according to the actual situation; preferably it is the actual distance between the user's eyes and the device to which the camera belongs, which may include the distance from the user's eyes to the camera, or the vertical distance from the user's eyes to the horizontal plane of the bottom edge of the device.
  • different image data can determine different actual distances.
  • For example, if the actual distance is the vertical distance from the user's eyes to the horizontal plane of the bottom edge of the device, the required image data may be the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image; in this case, the image data used when training the distance model is the total number of second pixels in the vertical direction between a known-distance user's eyes and the lower boundary of the image.
  • In practical applications, each type of actual distance and the image data needed to determine it can be collected into its own data set and a model trained for each set, so that distance models for the different actual distances are obtained; the corresponding actual distance is then obtained simply by inputting the matching type of image data of the unknown-distance user.
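  • To make S120 concrete, the toy sketch below feeds one kind of image data (the second-pixel total) into a trained regressor standing in for the distance model; the regressor type and all numbers are invented placeholders, since the embodiment does not fix a model type (a fuller training sketch appears under Embodiment 2 below).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stand-in for a distance model already trained on known-distance
# users (feature: second-pixel total, target: vertical distance in cm).
distance_model = LinearRegression()
distance_model.fit(np.array([[2300], [1800], [1200], [700]]),   # toy image data
                   np.array([35.0, 60.0, 95.0, 140.0]))         # toy measured distances

def recognize_distance(second_pixel_total):
    """S120: feed the unknown-distance user's image data into the distance model."""
    return float(distance_model.predict(np.array([[second_pixel_total]]))[0])

print(recognize_distance(1500))   # estimated actual distance, in cm
```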
  • When the camera captures an image that includes multiple unknown-distance users, the image data of each unknown-distance user is determined separately, the distance model is used to identify the image data of each unknown-distance user, and the actual distance between each unknown-distance user and the device to which the camera belongs is determined according to the recognition results.
  • It should be noted that, in practical applications, the device to which the camera belongs may be different from the device that performs the method; in that case, what the executing device determines is still the actual distance between the device to which the camera belongs and the unknown-distance user.
  • In the technical solution provided by this embodiment, the image data of an unknown-distance user collected by the camera is identified by using a distance model that is determined by training on image data of known-distance users, and the actual distance between the unknown-distance user and the device to which the camera belongs is determined according to the recognition result, so that the actual distance between the user and the device is obtained quickly and accurately by building the distance model through machine learning.
  • FIG. 2 is a flowchart of a method for determining a distance according to Embodiment 2 of the present invention. This embodiment is embodied on the basis of the above embodiment. Referring to FIG. 2, the method provided in this embodiment specifically includes:
  • S210: Acquire image data of known-distance users.
  • Specifically, an image containing a known-distance user is acquired, and the image data of the known-distance user is determined from the image.
  • The actual distance between a known-distance user and the device to which the image-capturing camera belongs can be determined by manual measurement.
  • Further, a preset number (e.g., 200, 500, etc.) of image data samples of known-distance users is acquired to ensure the accuracy of the trained model.
  • In general, the acquired images containing known-distance users cover user images at each actual-distance value that occurs in normal use. For example, if the actual distance is the first distance from the user's eyes to the camera, the device to which the camera belongs is an interactive smart tablet, and the specific value of the first distance lies in the range of 30 cm-150 cm, then images containing known-distance users are acquired at one-centimetre intervals across that range; preferably, more than one image is acquired for each centimetre.
  • Since different types of actual distance correspond to different image data, the image data of known-distance users can be acquired according to the type of actual distance that is desired.
  • S220: Train with the image data of the known-distance users and the actual distances of the known-distance users as training data of the distance model, to determine the distance model.
  • Optionally, the image data of known-distance users whose actual distance has the same value is collected to obtain multiple image data sets, and each value together with its corresponding image data set is used as training data to obtain the distance model.
  • the training method is not limited in this embodiment.
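  • Since the training method is left open, one possible realization of S220 is an off-the-shelf regressor fitted on (image data, measured distance) pairs, as sketched below with scikit-learn; the random-forest choice and the sample values are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# S220 training data: image data of known-distance users paired with their
# manually measured actual distances. All numbers are placeholders.
first_pixel_totals = np.array([[260000], [190000], [120000], [70000], [40000]])
first_distances_cm = np.array([30.0, 45.0, 70.0, 105.0, 150.0])

# The embodiment leaves the training method open; a random-forest regressor is
# just one reasonable choice for mapping a pixel total to a distance.
distance_model = RandomForestRegressor(n_estimators=100, random_state=0)
distance_model.fit(first_pixel_totals, first_distances_cm)

# Later, in S240, the unknown-distance user's first-pixel total is fed in.
print(distance_model.predict(np.array([[150000]])))   # estimated first distance, cm
```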
  • Further, if the distance model is to output multiple types of actual distance, the image data of known-distance users corresponding to the same value within each type of actual distance may be collected separately, and each value together with its corresponding image data set is used as training data, so as to obtain a distance model that outputs multiple types of actual distance.
  • S230: Acquire image data of an unknown-distance user collected by the camera.
  • S240: Identify the image data of the unknown-distance user by using the distance model, and determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result.
  • On the basis of the above embodiments, the image data is the total number of first pixels included in the head display area; correspondingly, the actual distance is the first distance from the user's eyes to the camera.
  • the first distance may indicate a lateral distance between the user and the device. If the first distance is too small, the user is too close to the device, and vice versa, the user is too far from the device.
  • the first distance may include a first sub-distance of the user's left eye to the camera, a second sub-distance of the user's right eye to the camera, and may also include a first distance from the center point between the user's eyes to the camera.
  • the image data corresponding to the first distance is the total number of pixels in the image for displaying the head of the user.
  • At this time, the device acquires a certain number of first-pixel totals of users whose first distance is known and performs training to obtain the distance model; subsequently, when the first-pixel total of a user whose first distance is unknown is acquired, the first-pixel total is used as the input of the distance model, and the first distance is obtained.
  • On the basis of the above embodiments, the image data is the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image; correspondingly, the actual distance is the vertical second distance from the eyes to the horizontal plane of the bottom edge of the device.
  • The vertical second distance may indicate the longitudinal distance between the user and the device. If the vertical second distance is too small, the device is in a relatively high position with respect to the user and the user needs to look up at the device; conversely, the device is in a relatively low position with respect to the user and the user needs to look down at the device.
  • the bottom edge of the display may be used as the bottom edge of the device, or the bottom edge of the outer frame of the device may be used as the bottom edge of the device.
  • The vertical second distance may include a vertical first sub-distance from the user's left eye to the horizontal plane of the bottom edge of the device, a vertical second sub-distance from the user's right eye to the horizontal plane of the bottom edge of the device, and may also include the vertical second distance from the center point between the user's eyes to the horizontal plane of the bottom edge of the device.
  • Further, the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image refers to the total number of pixels, counted in the direction perpendicular to the lower boundary of the image, from the pixel that displays the user's eyes to the lower boundary of the image.
  • The pixel that displays the user's eyes may be the pixel, among the pixels of the two eyes, that is closest to the lower boundary of the image, or the one farthest from the lower boundary of the image, or the middle pixel between the farthest and the nearest pixels; "farthest" and "nearest" here refer to the distance in the direction perpendicular to the lower boundary of the image.
  • At this time, the device acquires a certain number of second-pixel totals of users whose vertical second distance is known and performs training to obtain the distance model; when the second-pixel total of a user whose vertical second distance is unknown is subsequently acquired, the second-pixel total is used as the input of the distance model, and the user's vertical second distance is obtained.
  • In practical applications, a distance model that determines the first distance and the vertical second distance simultaneously can be trained. In that case, if the total number of first pixels of the unknown-distance user is used as the input of the distance model, the first distance is obtained; if the total number of second pixels of the unknown-distance user is used as the input, the vertical second distance is obtained; and if both the first-pixel total and the second-pixel total of the unknown-distance user are used as the input, the first distance and the vertical second distance are obtained at the same time.
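  • A joint model of this kind could, for instance, be a two-input, two-output regressor, as in the sketch below; again the model family and the numbers are placeholders rather than anything specified by the embodiment.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each input row: [first-pixel total, second-pixel total] of a known-distance user;
# each target row: [first distance (cm), vertical second distance (cm)].
# All values are illustrative placeholders.
X = np.array([[260000, 2300], [190000, 1900], [120000, 1400], [70000, 900], [40000, 500]])
y = np.array([[30.0, 20.0], [45.0, 28.0], [70.0, 38.0], [105.0, 50.0], [150.0, 65.0]])

# A random forest supports two-output regression directly, which matches the idea
# of a single model returning both the first distance and the vertical second distance.
joint_model = RandomForestRegressor(n_estimators=200, random_state=0)
joint_model.fit(X, y)

first_dist, vertical_dist = joint_model.predict(np.array([[150000, 1600]]))[0]
print(first_dist, vertical_dist)
```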
  • In the technical solution provided by this embodiment, the distance model is obtained by training on the image data of known-distance users and the corresponding actual distances, and the image data of an unknown-distance user is identified by the distance model to obtain the actual distance of the unknown-distance user. This makes it possible to obtain the distance between the user and the device quickly and accurately, including both the distance from the user's eyes to the camera and the vertical distance from the user's eyes to the horizontal plane of the bottom edge of the device, so that the determined actual distances are more diverse and the user experience is improved.
  • FIG. 3 is a flowchart of a method for determining a distance according to Embodiment 3 of the present invention. This embodiment is embodied on the basis of the above embodiment. Referring to FIG. 3, the method provided in this embodiment specifically includes:
  • S310: Acquire image data of known-distance users.
  • S320: Determine basic data of the known-distance users.
  • The basic data is data that affects the determination result when the actual distance is determined.
  • the basic data includes: face orientation data and/or age grouping.
  • Suppose two users are at the same actual distance, one with the head raised and one with the head lowered; the image data corresponding to the images of the two users captured by the camera may then differ, so when the actual distances of the two users are determined according to the distance model, different actual distances may be obtained, i.e. there is a deviation. Therefore, in order to obtain an accurate actual distance, the face orientation data needs to be taken into account when training the distance model.
  • the face orientation data refers to an angle at which the user's face faces in the image. It may include a first angle in the horizontal direction and a second angle in the vertical direction.
  • For example, if the face orientation data is 0° horizontally and 0° vertically, the user's face is directly facing the camera when the picture is taken.
  • If the face orientation data is 10° to the left and 10° upward, then, compared with the face directly facing the camera, the user's face is turned 10° to the left and the head is raised by 10°.
  • Further, changes in the camera's shooting angle and changes in the rotation angle of the whole device may both cause the face orientation data to change.
  • The manner of determining the face orientation data is not limited in this embodiment: for example, the face orientation data may be determined by analyzing and processing the image, it may be entered after manual measurement, or the rotation angle of the whole device may be determined and the face orientation data inferred from that rotation angle.
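  • One common way to obtain face orientation data by image analysis is to estimate head pose from facial landmarks with a perspective-n-point solve, as sketched below with OpenCV; the landmark detector, the generic 3D reference points, the focal-length approximation, and the angle-sign conventions are all assumptions and may need adjusting to match the horizontal and vertical angles described above.

```python
import cv2
import numpy as np

# Generic 3D reference points for a face (nose tip, chin, eye corners, mouth
# corners), in arbitrary units; these are common approximations, not values
# taken from the patent.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def face_orientation(image_points, frame_width, frame_height):
    """Estimate (horizontal angle, vertical angle) of the face, in degrees.

    image_points: 6x2 array of the corresponding 2D landmarks, assumed to come
    from any facial-landmark detector run on the captured frame.
    """
    focal = frame_width  # crude focal-length approximation
    camera_matrix = np.array([[focal, 0, frame_width / 2],
                              [0, focal, frame_height / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS,
                               np.asarray(image_points, dtype=np.float64),
                               camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("head pose could not be estimated")
    rotation, _ = cv2.Rodrigues(rvec)
    sy = np.sqrt(rotation[0, 0] ** 2 + rotation[1, 0] ** 2)
    pitch = np.degrees(np.arctan2(rotation[2, 1], rotation[2, 2]))  # vertical angle
    yaw = np.degrees(np.arctan2(-rotation[2, 0], sy))               # horizontal angle
    return yaw, pitch
```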
  • In general, because the head sizes of children and adults differ, the image data of a child and of an adult at the same actual distance may differ; therefore, age groups may be set in advance, and the age group to which each known-distance user belongs is confirmed, so as to ensure that an accurate distance model is trained. Specifically, the group boundaries can be set according to the actual situation, for example, ages 1-5 as one group, 6-12 as one group, 12-18 as one group, and over 18 as one group.
  • The manner of determining the age group is not limited: for example, the age group may be determined from the user's facial features after image analysis, or it may be entered manually.
  • At this time, the method may also include, according to the actual situation, determining the actual distance only for users in a certain age group. For example, the actual distance between a child and the interactive smart tablet is determined only when a child uses the tablet, and it is then confirmed whether the child is within the optimal distance range, so as to avoid harm to the child's body or eyes caused by an improper distance.
  • S330: Collect the image data of known-distance users having the same basic data into the same image data set.
  • Taking the age group as an example, the image data of known-distance users in the same age group is collected into the same image data set.
  • Taking the face orientation data as an example, the image data of known-distance users with the same face orientation data is collected into the same image data set. Alternatively, the face orientation data may itself be grouped, with the specific manner of grouping not limited, and the image data of known-distance users in the same group is then collected into the same image data set; this prevents an excessive number of image data sets caused by too many distinct face orientation values.
  • Optionally, within each image data set, the image data of known-distance users whose actual-distance values are the same is collected into the same sub-data set.
  • S340: Train with the image data set and the actual distances of the known-distance users corresponding to the image data set as training data of the distance model corresponding to the basic data, to determine the distance model.
  • At this time, each kind of basic data has a corresponding distance model.
  • When the basic data includes both the face orientation data and the age group, the image data sets grouped by face orientation data and by age group may be trained in combination, and each combination of basic data then has a corresponding distance model.
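  • The grouping and per-group training of S330/S340 could be organized as in the following sketch; the age-group boundaries follow the example above, while the orientation bin size and the regressor choice are assumptions.

```python
from collections import defaultdict
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def orientation_bin(yaw, pitch, bin_size=15):
    """Coarse grouping of face orientation data; the bin size is an assumption."""
    return (round(yaw / bin_size) * bin_size, round(pitch / bin_size) * bin_size)

def age_group(age):
    if age <= 5: return "1-5"
    if age <= 12: return "6-12"
    if age <= 18: return "12-18"
    return "18+"

def train_per_group(samples):
    """samples: iterable of (pixel_total, actual_distance_cm, age, yaw, pitch)."""
    grouped = defaultdict(list)
    for pixel_total, distance, age, yaw, pitch in samples:
        key = (age_group(age), orientation_bin(yaw, pitch))   # the "basic data"
        grouped[key].append((pixel_total, distance))

    models = {}
    for key, rows in grouped.items():                         # S340: one model per group
        X = np.array([[pixel_total] for pixel_total, _ in rows])
        y = np.array([distance for _, distance in rows])
        models[key] = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    return models
```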
  • S350: Acquire image data of an unknown-distance user collected by the camera.
  • S360: Determine basic data of the unknown-distance user.
  • The basic data of the unknown-distance user is determined in the same manner as the basic data of the known-distance users.
  • Optionally, when the basic data of the known-distance users includes the face orientation data and the age group, the basic data of the unknown-distance user may include at least one of the face orientation data and the age group.
  • S370: Identify the image data of the unknown-distance user by using the distance model corresponding to the basic data of the unknown-distance user, and determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result.
  • For example, if the unknown-distance user belongs to the 6-12 age group, the image data of the unknown-distance user is identified by the distance model corresponding to the 6-12 age group.
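  • Continuing the grouping sketch above (and reusing its helpers), S360/S370 then amount to looking up the model whose basic data matches the unknown-distance user before predicting:

```python
def recognize_with_basic_data(models, pixel_total, age, yaw, pitch):
    """S360/S370: select the distance model matching the user's basic data and
    identify the image data with it. `models` comes from train_per_group()."""
    key = (age_group(age), orientation_bin(yaw, pitch))
    model = models.get(key)
    if model is None:
        raise KeyError(f"no distance model trained for basic data {key}")
    return float(model.predict(np.array([[pixel_total]]))[0])

# e.g. a child in the 6-12 group, roughly facing the camera:
# distance_cm = recognize_with_basic_data(models, pixel_total=150000, age=9, yaw=3, pitch=-2)
```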
  • In the technical solution provided by this embodiment, the image data of known-distance users with the same basic data is collected into the same image data set, each image data set and its corresponding actual distances are trained to obtain the distance model for that basic data, the basic data of the unknown-distance user is determined, and the image data of the unknown-distance user is identified by the distance model corresponding to that basic data to obtain the actual distance of the unknown-distance user. This makes it possible to determine the actual distance between the user and the device for different face orientations and for users of different ages, so that the actual distance is more accurate and the user experience is improved.
  • FIG. 4 is a schematic structural diagram of an apparatus for determining a distance according to Embodiment 4 of the present invention. Specifically, referring to FIG. 4, the apparatus specifically includes:
  • The data acquisition module 401 is configured to acquire image data of an unknown-distance user collected by the camera; the distance recognition module 402 is configured to identify the image data of the unknown-distance user by using the distance model and to determine, according to the recognition result, the actual distance between the unknown-distance user and the device to which the camera belongs, where the distance model is determined by training on image data of known-distance users.
  • In the technical solution provided by this embodiment, the image data of an unknown-distance user collected by the camera is identified by using a distance model that is determined by training on image data of known-distance users, and the actual distance between the unknown-distance user and the device to which the camera belongs is determined according to the recognition result, so that the actual distance between the user and the device is obtained quickly and accurately by building the distance model through machine learning.
  • On the basis of the above embodiments, the apparatus further includes: an image acquisition module configured to acquire image data of known-distance users before the image data of the unknown-distance user collected by the camera is acquired; and a model training module configured to train with the image data of the known-distance users and the actual distances of the known-distance users as training data of the distance model, to determine the distance model.
  • On the basis of the above embodiments, the model training module includes: a first data determining unit configured to determine basic data of the known-distance users, the basic data including face orientation data and/or an age group; a data collection unit configured to collect the image data of known-distance users having the same basic data into the same image data set; and a data training unit configured to train with the image data set and the actual distances of the known-distance users corresponding to the image data set as training data of the distance model corresponding to the basic data, to determine the distance model.
  • Correspondingly, the distance recognition module 402 includes: a second data determining unit configured to determine basic data of the unknown-distance user; a data identification unit configured to identify the image data of the unknown-distance user by using the distance model corresponding to the basic data of the unknown-distance user; and a distance determining unit configured to determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result.
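  • Purely as an illustration of how the described modules might map onto code, the following hypothetical classes mirror the data acquisition module 401 and the distance recognition module 402; the structure is an assumption, not the patent's implementation.

```python
class DataAcquisitionModule:
    """Mirrors module 401: obtain the unknown-distance user's image data."""
    def __init__(self, feature_extractor):
        self.feature_extractor = feature_extractor   # e.g. a pixel-counting function

    def acquire(self, frame, head_box, eye_y):
        return self.feature_extractor(frame.shape[:2], head_box, eye_y)

class DistanceRecognitionModule:
    """Mirrors module 402: identify image data with the trained distance model."""
    def __init__(self, distance_model):
        self.distance_model = distance_model         # any trained regressor

    def recognize(self, image_data):
        return float(self.distance_model.predict([[image_data]])[0])
```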
  • On the basis of the above embodiments, the image data is the total number of first pixels included in the head display area; correspondingly, the actual distance is the first distance from the user's eyes to the camera.
  • On the basis of the above embodiments, the image data is the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image; correspondingly, the actual distance is the vertical second distance from the eyes to the horizontal plane of the bottom edge of the device.
  • the device for determining the distance provided by the embodiment of the present invention may be used to perform the method for determining the distance provided by any of the foregoing embodiments, and has corresponding functions and beneficial effects.
  • FIG. 5 is a schematic structural diagram of a device according to Embodiment 5 of the present invention.
  • the device includes a processor 50, a memory 51, an input device 52, an output device 53, and a camera 54.
  • The number of processors 50 in the device may be one or more, and one processor 50 is taken as an example in FIG. 5; the number of cameras 54 in the device may be one or more, and one camera 54 is taken as an example in FIG. 5. The processor 50, the memory 51, the input device 52, the output device 53, and the camera 54 in the device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 5.
  • When the processor 50 executes the program, the method for determining a distance in the embodiments of the present invention is implemented.
  • The memory 51, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for determining a distance in the embodiments of the present invention (for example, the image acquisition module 401 and the distance determination module 402 in the apparatus for determining a distance).
  • the processor 50 executes various functional applications of the device and data processing by executing software programs, instructions, and modules stored in the memory 51, that is, implementing the above-described method of determining the distance.
  • The memory 51 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the device, and so on. In addition, the memory 51 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 51 may further include memory remotely located relative to the processor 50, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Input device 52 can be used to receive input digital or character information and to generate key signal inputs related to user settings and function control of the device.
  • the output device 53 may include a display device such as a display screen.
  • The camera 54 can be used to acquire image data.
  • the device provided by this embodiment may be used to perform the method provided by any of the foregoing embodiments, and has corresponding functions and beneficial effects.
  • Embodiment 6 of the present invention also provides a storage medium including computer executable instructions for performing a method of determining a distance when executed by a computer processor, the method comprising:
  • acquiring image data of an unknown-distance user collected by a camera; and identifying the image data of the unknown-distance user by using a distance model, and determining the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result, wherein the distance model is determined by training on image data of known-distance users.
  • Certainly, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the method for determining a distance provided by any embodiment of the present invention, with the corresponding functions and beneficial effects.
  • Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware, and can certainly also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk of a computer, and includes a number of instructions to cause a computer device (which may be a robot, a personal computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present invention.
  • It is worth noting that, in the embodiments of the above apparatus, the units and modules included are only divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for ease of mutual distinction and are not intended to limit the protection scope of the present invention.

Abstract

A method, apparatus, device and storage medium for determining distance. The method includes: acquiring image data of an unknown-distance user collected by a camera (110); and identifying the image data of the unknown-distance user by using a distance model, and determining the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result (120), wherein the distance model is determined by training on image data of known-distance users. The above solution can solve the technical problem that existing distance-determination solutions cannot accurately obtain the distance between a user and a smart device.

Description

Method, apparatus, device and storage medium for determining distance
Technical Field
The present invention relates to the field of machine learning technology, and in particular to a method, an apparatus, a device and a storage medium for determining a distance.
Background Art
Smart devices have become indispensable items in people's lives. Users can fulfil all kinds of needs through the various functions of smart devices, such as shopping, teaching and painting.
While a user is using a smart device, the smart device can monitor the user's usage state in real time so as to serve the user better according to that state. For example, the smart device determines the distance between the user and the smart device in real time and uses this distance to confirm whether the user is too close to or too far from the smart device, and then reminds the user, so as to ensure that the user uses the smart device within an appropriate distance range. In the prior art, the distance between the user and the smart device is usually determined by installing a distance sensor in the smart device. In general, a distance sensor can usually only measure distances within a certain range and is subject to measurement error, so the smart device obtains an inaccurate distance and consequently cannot serve the user as well.
Summary of the Invention
In view of this, the embodiments of the present invention provide a method, an apparatus, a device and a storage medium for determining a distance, so as to solve the technical problem that existing distance-determination solutions cannot accurately obtain the distance between a user and a smart device.
In a first aspect, an embodiment of the present invention provides a method for determining a distance, including:
acquiring image data of an unknown-distance user collected by a camera;
identifying the image data of the unknown-distance user by using a distance model, and determining an actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result, wherein the distance model is determined by training on image data of known-distance users.
In a second aspect, an embodiment of the present invention further provides an apparatus for determining a distance, including:
a data acquisition module configured to acquire image data of an unknown-distance user collected by a camera;
a distance recognition module configured to identify the image data of the unknown-distance user by using a distance model and determine an actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result, wherein the distance model is determined by training on image data of known-distance users.
In a third aspect, an embodiment of the present invention further provides a device, including:
one or more processors;
a memory for storing one or more programs;
a camera for acquiring image data;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining a distance according to the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to perform the method for determining a distance according to the embodiments of the present invention.
In the method, apparatus, device and storage medium for determining a distance provided above, the image data of an unknown-distance user collected by a camera is identified by using a distance model that is determined by training on image data of known-distance users, and the actual distance between the unknown-distance user and the device to which the camera belongs is determined according to the recognition result, so that the actual distance between the user and the device is obtained quickly and accurately by building the distance model through machine learning.
Brief Description of the Drawings
Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
FIG. 1 is a flowchart of a method for determining a distance according to Embodiment 1 of the present invention;
FIG. 2 is a flowchart of a method for determining a distance according to Embodiment 2 of the present invention;
FIG. 3 is a flowchart of a method for determining a distance according to Embodiment 3 of the present invention;
FIG. 4 is a schematic structural diagram of an apparatus for determining a distance according to Embodiment 4 of the present invention;
FIG. 5 is a schematic structural diagram of a device according to Embodiment 5 of the present invention.
Detailed Description
The present invention is further described in detail below with reference to the drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the present invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the present invention, rather than the entire content, are shown in the drawings.
Embodiment 1
FIG. 1 is a flowchart of a method for determining a distance according to Embodiment 1 of the present invention. The method for determining a distance provided by this embodiment may be performed by an apparatus for determining a distance, which may be implemented by software and/or hardware and integrated in a device. The device includes at least one camera, which may be a front camera or a rear camera. Optionally, the device may be a smart device such as an interactive smart tablet or a smartphone.
Referring to FIG. 1, the method provided by this embodiment specifically includes:
S110. Acquire image data of an unknown-distance user collected by the camera.
Exemplarily, the unknown-distance user is the user whose actual distance to the device currently needs to be determined. The image data is data determined from an image containing the user and is a typical parameter for determining the actual distance between the user and the device. In general, the image data may be pixel data related to the user in the image: for example, it may be the ratio of the pixels occupied by the user to all pixels of the image, the total number of first pixels included in the display area of the user's head, or the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image.
Further, only one frame currently captured by the camera may be acquired, without taking a photograph, and the image data determined from that frame. The advantage of this is that the latest image data can be acquired in real time as the user moves, ensuring the timeliness of the image data; this scheme is suitable for scenarios with high real-time requirements on the image data. Alternatively, the camera may first be controlled to take a photograph containing the unknown-distance user, and the image data determined by analyzing that photograph. The advantage of this is that the image data only needs to be determined from a single photograph, which reduces the amount of data processing; this scheme is suitable for scenarios with low real-time requirements on the image data. The format of the image captured by the camera is not limited, for example the JPEG format. Specifically, the higher the resolution of the camera, the more pixels are obtained when acquiring an image and the more accurate the corresponding image data. In practical applications, cameras of different resolutions can be selected according to user requirements; for example, in this embodiment a camera with a resolution of 3264×2448 is used, so an image of 3264×2448 pixels can be obtained.
S120. Identify the image data of the unknown-distance user by using a distance model, and determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result.
The distance model is determined by training on image data of known-distance users, and the process of training the distance model is a machine-learning process. The image data used to train the distance model is of the same type as the image data acquired in S110. Further, the actual distance between different unknown-distance users and the device to which the camera belongs can be determined according to the distance model.
Optionally, the specific content of the actual distance can be set according to the actual situation; preferably it is the actual distance between the user's eyes and the device to which the camera belongs, which may include the distance from the user's eyes to the camera, the vertical distance from the user's eyes to the horizontal plane of the bottom edge of the device, and so on. Different image data can determine different actual distances. For example, if the actual distance is the vertical distance from the user's eyes to the horizontal plane of the bottom edge of the device, the required image data may be the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image; in this case, the image data used when training the distance model is the total number of second pixels in the vertical direction between a known-distance user's eyes and the lower boundary of the image. In practical applications, each type of actual distance and the image data needed to determine it can be collected into its own data set and a model trained on each set, so that distance models for determining the different actual distances are obtained; the corresponding actual distance can then be obtained simply by inputting the matching type of image data of the unknown-distance user.
Optionally, when the camera captures an image that contains multiple unknown-distance users, the image data of each unknown-distance user is determined separately, the distance model is used to identify the image data of each unknown-distance user, and the actual distance between each unknown-distance user and the device to which the camera belongs is determined according to the recognition results.
It should be noted that, in practical applications, the device to which the camera belongs may be different from the device that performs the method; in that case, what the executing device determines is still the actual distance between the device to which the camera belongs and the unknown-distance user.
In the technical solution provided by this embodiment, the image data of an unknown-distance user collected by the camera is identified by using a distance model that is determined by training on image data of known-distance users, and the actual distance between the unknown-distance user and the device to which the camera belongs is determined according to the recognition result, so that the actual distance between the user and the device is obtained quickly and accurately by building the distance model through machine learning.
Embodiment 2
FIG. 2 is a flowchart of a method for determining a distance according to Embodiment 2 of the present invention. This embodiment is a concretization on the basis of the above embodiment. Referring to FIG. 2, the method provided by this embodiment specifically includes:
S210. Acquire image data of known-distance users.
Specifically, an image containing a known-distance user is acquired, and the image data of the known-distance user is determined from the image. The actual distance between a known-distance user and the device to which the image-capturing camera belongs can be determined by manual measurement. Further, a preset number (e.g., 200, 500, etc.) of image data samples of known-distance users is acquired to ensure the accuracy of the trained model. In general, the acquired images containing known-distance users cover user images at each actual-distance value that occurs in normal use. For example, if the actual distance is the first distance from the user's eyes to the camera, the device to which the camera belongs is an interactive smart tablet, and the specific value of the first distance lies in the range of 30 cm-150 cm, then images containing known-distance users are acquired at one-centimetre intervals across that range; preferably, more than one image is acquired for each centimetre.
Specifically, since different types of actual distance correspond to different image data, the image data of known-distance users can be acquired according to the type of actual distance that is desired.
S220. Train with the image data of the known-distance users and the actual distances of the known-distance users as training data of the distance model, to determine the distance model.
Optionally, the image data of known-distance users whose actual distance has the same value is collected to obtain multiple image data sets, and each value together with its corresponding image data set is used as training data to obtain the distance model. The training method is not limited in this embodiment.
Further, if the distance model is to output multiple types of actual distance, the image data of known-distance users corresponding to the same value within each type of actual distance may be collected separately, and each value together with its corresponding image data set is used as training data, so as to obtain a distance model that outputs multiple types of actual distance.
S230. Acquire image data of an unknown-distance user collected by the camera.
S240. Identify the image data of the unknown-distance user by using the distance model, and determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result.
On the basis of the above embodiments, the image data is the total number of first pixels included in the head display area; correspondingly, the actual distance is the first distance from the user's eyes to the camera.
The first distance may indicate the lateral distance between the user and the device: if the first distance is too small, the user is too close to the device; conversely, the user is too far from the device. Specifically, the first distance may include a first sub-distance from the user's left eye to the camera and a second sub-distance from the user's right eye to the camera, and may also include the first distance from the center point between the user's eyes to the camera. Further, the image data corresponding to the first distance is the total number of pixels in the image used to display the user's head. At this time, the device acquires a certain number of first-pixel totals of users whose first distance is known and performs training to obtain the distance model; subsequently, when the first-pixel total of a user whose first distance is unknown is acquired, the first-pixel total is used as the input of the distance model, and the first distance is obtained.
On the basis of the above embodiments, the image data is the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image; correspondingly, the actual distance is the vertical second distance from the eyes to the horizontal plane of the bottom edge of the device.
The vertical second distance may indicate the longitudinal distance between the user and the device: if the vertical second distance is too small, the device is in a relatively high position with respect to the user and the user needs to look up at the device; conversely, the device is in a relatively low position with respect to the user and the user needs to look down at the device. Specifically, the bottom edge of the display screen may be used as the bottom edge of the device, or the bottom edge of the outer frame of the device may be used as the bottom edge of the device. The vertical second distance may include a vertical first sub-distance from the user's left eye to the horizontal plane of the bottom edge of the device and a vertical second sub-distance from the user's right eye to that plane, and may also include the vertical second distance from the center point between the user's eyes to the horizontal plane of the bottom edge of the device. Further, the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image refers to the total number of pixels, counted in the direction perpendicular to the lower boundary of the image, from the pixel that displays the user's eyes to the lower boundary of the image. The pixel that displays the user's eyes may be the pixel, among the pixels of the two eyes, that is closest to the lower boundary of the image, the pixel farthest from that boundary, or the middle pixel between the farthest and nearest pixels; "farthest" and "nearest" here refer to the distance in the direction perpendicular to the lower boundary of the image. At this time, the device acquires a certain number of second-pixel totals of users whose vertical second distance is known and performs training to obtain the distance model; when the second-pixel total of a user whose vertical second distance is unknown is subsequently acquired, the second-pixel total is used as the input of the distance model, and the user's vertical second distance is obtained.
In practical applications, a distance model that determines the first distance and the vertical second distance simultaneously can be trained. In that case, if the total number of first pixels of the unknown-distance user is used as the input of the distance model, the first distance is obtained; if the total number of second pixels of the unknown-distance user is used as the input, the vertical second distance is obtained; and if both the first-pixel total and the second-pixel total of the unknown-distance user are used as the input, the first distance and the vertical second distance are obtained at the same time.
In the technical solution provided by this embodiment, the distance model is obtained by training on the image data of known-distance users and the corresponding actual distances, and the image data of an unknown-distance user is identified by the distance model to obtain the actual distance of the unknown-distance user. This makes it possible to obtain the actual distance between the user and the device quickly and accurately, including both the distance from the user's eyes to the camera and the vertical distance from the user's eyes to the horizontal plane of the bottom edge of the device, so that the determined actual distances are more diverse and the user experience is improved.
Embodiment 3
FIG. 3 is a flowchart of a method for determining a distance according to Embodiment 3 of the present invention. This embodiment is a concretization on the basis of the above embodiment. Referring to FIG. 3, the method provided by this embodiment specifically includes:
S310. Acquire image data of known-distance users.
S320. Determine basic data of the known-distance users.
The basic data is data that affects the determination result when the actual distance is determined. The basic data includes face orientation data and/or an age group.
Suppose two users are at the same actual distance, one with the head raised and one with the head lowered; the image data corresponding to the images of the two users captured by the camera may then differ, so when the actual distances of the two users are determined according to the distance model, different actual distances may be obtained, i.e. there is a deviation. Therefore, in order to obtain an accurate actual distance, the face orientation data needs to be taken into account when training the distance model. Specifically, the face orientation data refers to the angle of the user's face orientation in the image, which may include a first angle in the horizontal direction and a second angle in the vertical direction. For example, if the face orientation data is 0° horizontally and 0° vertically, the user's face is directly facing the camera when the picture is taken; if the face orientation data is 10° to the left and 10° upward, then, compared with the face directly facing the camera, the user's face is turned 10° to the left and the head is raised by 10°. Further, changes in the camera's shooting angle and changes in the rotation angle of the whole device will both change the face orientation data. The manner of determining the face orientation data is not limited in this embodiment: for example, it may be determined by analyzing and processing the image, it may be entered after manual measurement, or the rotation angle of the whole device may be determined and the face orientation data inferred from that rotation angle.
In general, because the head sizes of children and adults differ, the image data of a child and of an adult at the same actual distance may differ; therefore, age groups may be set in advance, and the age group to which each known-distance user belongs is confirmed, so as to ensure that an accurate distance model is trained. Specifically, the group boundaries can be set according to the actual situation, for example, ages 1-5 as one group, 6-12 as one group, 12-18 as one group, and over 18 as one group. The manner of determining the age group is not limited: for example, it may be determined from the user's facial features after image analysis, or entered manually. At this time, the method may also include, according to the actual situation, determining the actual distance only for users in a certain age group; for example, the actual distance between a child and the interactive smart tablet is determined only when a child uses the tablet, and it is then confirmed whether the child is within the optimal distance range, so as to avoid harm to the child's body or eyes caused by an improper distance.
S330. Collect the image data of known-distance users having the same basic data into the same image data set.
Taking the age group as an example, the image data of known-distance users in the same age group is collected into the same image data set. Taking the face orientation data as an example, the image data of known-distance users with the same face orientation data is collected into the same image data set. Alternatively, the face orientation data may itself be grouped, with the specific manner of grouping not limited; the image data of known-distance users in the same group is then collected into the same image data set, which prevents an excessive number of image data sets caused by too many distinct face orientation values. Optionally, within each image data set, the image data of known-distance users whose actual-distance values are the same is collected into the same sub-data set.
S340. Train with the image data set and the actual distances of the known-distance users corresponding to the image data set as training data of the distance model corresponding to the basic data, to determine the distance model.
At this time, each kind of basic data has a corresponding distance model. When the basic data includes both the face orientation data and the age group, the image data sets grouped by face orientation data and by age group may be trained in combination, and each combination of basic data has a corresponding distance model.
S350. Acquire image data of an unknown-distance user collected by the camera.
S360. Determine basic data of the unknown-distance user.
The basic data of the unknown-distance user is determined in the same manner as the basic data of the known-distance users. Optionally, when the basic data of the known-distance users includes the face orientation data and the age group, the basic data of the unknown-distance user may include at least one of the face orientation data and the age group.
S370. Identify the image data of the unknown-distance user by using the distance model corresponding to the basic data of the unknown-distance user, and determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result.
For example, if the unknown-distance user belongs to the 6-12 age group, the image data of the unknown-distance user is identified by the distance model corresponding to the 6-12 age group.
In the technical solution provided by this embodiment, the image data of known-distance users with the same basic data is collected into the same image data set, each image data set and its corresponding actual distances are trained to obtain the distance model for that basic data, the basic data of the unknown-distance user is determined, and the image data of the unknown-distance user is identified by the distance model corresponding to that basic data to obtain the actual distance of the unknown-distance user. This makes it possible to determine the actual distance between the user and the device for different face orientations and for users of different ages, so that the actual distance is more accurate and the user experience is improved.
Embodiment 4
FIG. 4 is a schematic structural diagram of an apparatus for determining a distance according to Embodiment 4 of the present invention. Specifically, referring to FIG. 4, the apparatus includes:
a data acquisition module 401 configured to acquire image data of an unknown-distance user collected by a camera; and a distance recognition module 402 configured to identify the image data of the unknown-distance user by using a distance model and determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result, wherein the distance model is determined by training on image data of known-distance users.
In the technical solution provided by this embodiment, the image data of an unknown-distance user collected by the camera is identified by using a distance model that is determined by training on image data of known-distance users, and the actual distance between the unknown-distance user and the device to which the camera belongs is determined according to the recognition result, so that the actual distance between the user and the device is obtained quickly and accurately by building the distance model through machine learning.
On the basis of the above embodiments, the apparatus further includes: an image acquisition module configured to acquire image data of known-distance users before the image data of the unknown-distance user collected by the camera is acquired; and a model training module configured to train with the image data of the known-distance users and the actual distances of the known-distance users as training data of the distance model, to determine the distance model.
On the basis of the above embodiments, the model training module includes: a first data determining unit configured to determine basic data of the known-distance users, the basic data including face orientation data and/or an age group; a data collection unit configured to collect the image data of known-distance users having the same basic data into the same image data set; and a data training unit configured to train with the image data set and the actual distances of the known-distance users corresponding to the image data set as training data of the distance model corresponding to the basic data, to determine the distance model.
Correspondingly, the distance recognition module 402 includes: a second data determining unit configured to determine basic data of the unknown-distance user; a data identification unit configured to identify the image data of the unknown-distance user by using the distance model corresponding to the basic data of the unknown-distance user; and a distance determining unit configured to determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result.
On the basis of the above embodiments, the image data is the total number of first pixels included in the head display area; correspondingly, the actual distance is the first distance from the user's eyes to the camera.
On the basis of the above embodiments, the image data is the total number of second pixels in the vertical direction between the user's eyes and the lower boundary of the image; correspondingly, the actual distance is the vertical second distance from the eyes to the horizontal plane of the bottom edge of the device.
The apparatus for determining a distance provided by the embodiments of the present invention can be used to perform the method for determining a distance provided by any of the above embodiments, and has the corresponding functions and beneficial effects.
Embodiment 5
FIG. 5 is a schematic structural diagram of a device according to Embodiment 5 of the present invention. As shown in FIG. 5, the device includes a processor 50, a memory 51, an input device 52, an output device 53, and a camera 54; the number of processors 50 in the device may be one or more, and one processor 50 is taken as an example in FIG. 5; the number of cameras 54 in the device may be one or more, and one camera 54 is taken as an example in FIG. 5; the processor 50, the memory 51, the input device 52, the output device 53, and the camera 54 in the device may be connected by a bus or in other ways, and connection by a bus is taken as an example in FIG. 5. When the processor 50 executes the program, the method for determining a distance in the embodiments of the present invention is implemented.
The memory 51, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for determining a distance in the embodiments of the present invention (for example, the image acquisition module 401 and the distance determination module 402 in the apparatus for determining a distance). The processor 50 runs the software programs, instructions, and modules stored in the memory 51 to execute the various functional applications and data processing of the device, that is, to implement the method for determining a distance described above.
The memory 51 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the device, and so on. In addition, the memory 51 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 51 may further include memory remotely located relative to the processor 50, and this remote memory may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 52 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the device. The output device 53 may include a display device such as a display screen. The camera 54 can be used to acquire image data.
The device provided by this embodiment can be used to perform the method provided by any of the above embodiments, and has the corresponding functions and beneficial effects.
Embodiment 6
Embodiment 6 of the present invention further provides a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to perform a method for determining a distance, the method including:
acquiring image data of an unknown-distance user collected by a camera;
identifying the image data of the unknown-distance user by using a distance model, and determining the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result, wherein the distance model is determined by training on image data of known-distance users.
Certainly, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the method for determining a distance provided by any embodiment of the present invention, with the corresponding functions and beneficial effects.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by means of software plus the necessary general-purpose hardware, and can certainly also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can essentially be embodied in the form of a software product, and this computer software product can be stored in a computer-readable storage medium, such as a floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk of a computer, and includes a number of instructions to cause a computer device (which may be a robot, a personal computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present invention.
It is worth noting that, in the embodiments of the above apparatus, the units and modules included are only divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for ease of mutual distinction and are not intended to limit the protection scope of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to the above embodiments and may include more other equivalent embodiments without departing from the concept of the present invention, the scope of which is determined by the appended claims.

Claims (10)

  1. A method for determining a distance, comprising:
    acquiring image data of an unknown-distance user collected by a camera;
    identifying the image data of the unknown-distance user by using a distance model, and determining an actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result, wherein the distance model is determined by training on image data of known-distance users.
  2. The method according to claim 1, wherein before the acquiring of the image data of the unknown-distance user collected by the camera, the method further comprises:
    acquiring image data of known-distance users;
    training with the image data of the known-distance users and the actual distances of the known-distance users as training data of the distance model, to determine the distance model.
  3. The method according to claim 2, wherein the training with the image data of the known-distance users and the actual distances of the known-distance users as training data of the distance model comprises:
    determining basic data of the known-distance users, the basic data comprising face orientation data and/or an age group;
    collecting the image data of the known-distance users having the same basic data into a same image data set;
    training with the image data set and the actual distances of the known-distance users corresponding to the image data set as training data of the distance model corresponding to the basic data; and
    the identifying of the image data of the unknown-distance user by using the distance model comprises:
    determining basic data of the unknown-distance user;
    identifying the image data of the unknown-distance user by using the distance model corresponding to the basic data of the unknown-distance user.
  4. The method according to claim 2, wherein the image data is a total number of first pixels included in a head display area and, correspondingly, the actual distance is a first distance from the user's eyes to the camera.
  5. The method according to claim 2, wherein the image data is a total number of second pixels in the vertical direction between the user's eyes and a lower boundary of the image and, correspondingly, the actual distance is a vertical second distance from the eyes to the horizontal plane of the bottom edge of the device.
  6. An apparatus for determining a distance, comprising:
    a data acquisition module configured to acquire image data of an unknown-distance user collected by a camera;
    a distance recognition module configured to identify the image data of the unknown-distance user by using a distance model and determine an actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result, wherein the distance model is determined by training on image data of known-distance users.
  7. The apparatus according to claim 6, further comprising:
    an image acquisition module configured to acquire image data of known-distance users before the image data of the unknown-distance user collected by the camera is acquired;
    a model training module configured to train with the image data of the known-distance users and the actual distances of the known-distance users as training data of the distance model, to determine the distance model.
  8. The apparatus according to claim 7, wherein the model training module comprises:
    a first data determining unit configured to determine basic data of the known-distance users, the basic data comprising face orientation data and/or an age group;
    a data collection unit configured to collect the image data of the known-distance users having the same basic data into a same image data set;
    a data training unit configured to train with the image data set and the actual distances of the known-distance users corresponding to the image data set as training data of the distance model corresponding to the basic data, to determine the distance model; and
    the distance recognition module comprises:
    a second data determining unit configured to determine basic data of the unknown-distance user;
    a data identification unit configured to identify the image data of the unknown-distance user by using the distance model corresponding to the basic data of the unknown-distance user;
    a distance determining unit configured to determine the actual distance between the unknown-distance user and the device to which the camera belongs according to the recognition result.
  9. A device, comprising:
    one or more processors;
    a memory for storing one or more programs;
    a camera for acquiring image data;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining a distance according to any one of claims 1-5.
  10. A storage medium containing computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are used to perform the method for determining a distance according to any one of claims 1-5.
PCT/CN2017/116769 2017-11-10 2017-12-17 Method, apparatus, device and storage medium for determining distance WO2019090904A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711104693.9A CN107909100A (zh) 2017-11-10 2017-11-10 确定距离的方法、装置、设备及存储介质
CN201711104693.9 2017-11-10

Publications (1)

Publication Number Publication Date
WO2019090904A1 true WO2019090904A1 (zh) 2019-05-16

Family

ID=61844755

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/116769 WO2019090904A1 (zh) 2017-11-10 2017-12-17 确定距离的方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN107909100A (zh)
WO (1) WO2019090904A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846336B (zh) * 2018-05-31 2022-03-15 深圳市易成自动驾驶技术有限公司 目标检测方法、装置及计算机可读存储介质
CN110466453B (zh) * 2019-08-28 2021-02-23 安徽江淮汽车集团股份有限公司 盲点监测系统车道宽度门限值调整方法
CN113408320A (zh) * 2020-03-16 2021-09-17 上海博泰悦臻网络技术服务有限公司 用于车辆避撞的方法、电子设备和计算机存储介质
CN111629145B (zh) * 2020-05-26 2022-04-12 广州视源电子科技股份有限公司 摄像头翻转控制方法、装置、显示终端及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110187703A1 (en) * 2010-01-29 2011-08-04 Kedar Anil Patwardhan Method and system for object tracking using appearance model
CN106022209A (zh) * 2016-04-29 2016-10-12 杭州华橙网络科技有限公司 一种基于人脸检测的距离估算和处理的方法及装置
CN106295533A (zh) * 2016-08-01 2017-01-04 厦门美图之家科技有限公司 一种自拍图像的优化方法、装置和拍摄终端
CN106959076A (zh) * 2017-02-20 2017-07-18 广州视源电子科技股份有限公司 基于摄像头的人像距离检测方法及系统
CN107277367A (zh) * 2017-07-27 2017-10-20 未来科技(襄阳)有限公司 拍照处理方法、装置、设备和存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102833486B (zh) * 2012-07-05 2016-09-21 深圳泰山体育科技股份有限公司 一种实时调节视频图像中人脸显示比例的方法及装置
CN104504374A (zh) * 2014-12-19 2015-04-08 合肥科飞视觉科技有限公司 一种自动监控人眼到屏幕距离的方法及系统
CN106096542B (zh) * 2016-06-08 2020-02-11 中国科学院上海高等研究院 基于距离预测信息的图像视频场景识别方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110187703A1 (en) * 2010-01-29 2011-08-04 Kedar Anil Patwardhan Method and system for object tracking using appearance model
CN106022209A (zh) * 2016-04-29 2016-10-12 杭州华橙网络科技有限公司 一种基于人脸检测的距离估算和处理的方法及装置
CN106295533A (zh) * 2016-08-01 2017-01-04 厦门美图之家科技有限公司 一种自拍图像的优化方法、装置和拍摄终端
CN106959076A (zh) * 2017-02-20 2017-07-18 广州视源电子科技股份有限公司 基于摄像头的人像距离检测方法及系统
CN107277367A (zh) * 2017-07-27 2017-10-20 未来科技(襄阳)有限公司 拍照处理方法、装置、设备和存储介质

Also Published As

Publication number Publication date
CN107909100A (zh) 2018-04-13

Similar Documents

Publication Publication Date Title
US11010967B2 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
WO2019090904A1 (zh) 确定距离的方法、装置、设备及存储介质
US11928800B2 (en) Image coordinate system transformation method and apparatus, device, and storage medium
CN108446698B (zh) 在图像中检测文本的方法、装置、介质及电子设备
WO2020063139A1 (zh) 脸部建模方法、装置、电子设备和计算机可读介质
US10681336B2 (en) Depth map generation
TW201911130A (zh) 一種翻拍影像識別方法及裝置
JP2018148367A (ja) 画像処理装置、画像処理システム、画像処理方法、及びプログラム
CN107113415A (zh) 用于多技术深度图获取和融合的方法和设备
KR20160119221A (ko) 이미징 디바이스에서의 사진 구도 및 포지션 안내
US9813693B1 (en) Accounting for perspective effects in images
CN109891189A (zh) 策划的摄影测量
CN112101124B (zh) 一种坐姿检测方法及装置
CN111008935B (zh) 一种人脸图像增强方法、装置、系统及存储介质
CN109117760A (zh) 图像处理方法、装置、电子设备和计算机可读介质
TW201222288A (en) Image retrieving system and method and computer program product thereof
WO2021136386A1 (zh) 数据处理方法、终端和服务器
CN107622497B (zh) 图像裁剪方法、装置、计算机可读存储介质和计算机设备
CN111307039A (zh) 一种物体长度识别方法、装置、终端设备和存储介质
CN107764233B (zh) 一种测量方法及装置
US20130135446A1 (en) Street view creating system and method thereof
CN113160231A (zh) 一种样本生成方法、样本生成装置及电子设备
WO2016145831A1 (zh) 图像的获取方法及装置
CN114930798A (zh) 拍摄对象切换方法及装置、图像处理方法及装置
EP3756164B1 (en) Methods of modeling a 3d object, and related devices and computer program products

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17931181

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17931181

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/10/2020)
