CN109522280B - Image file format, image file generating method, image file generating device and application

Info

Publication number: CN109522280B
Application number: CN201811353447.1A
Authority: CN (China)
Prior art keywords: information, image, sensor, value, image file
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN109522280A
Inventor: 吴东辉 (Wu Donghui)
Current Assignee: Harbin Huaqiang Electric Power Automation Engineering Co., Ltd.
Application filed by Harbin Huaqiang Electric Power Automation Engineering Co., Ltd.
Priority to CN201811353447.1A
Publication of CN109522280A
Application granted; publication of CN109522280B

Abstract

The present invention relates to the field of software, image information processing methods and programs, and in particular to an image file format, an image file generation method, an image file generation device, and an image file application. A camera device is provided with at least one sensor connected to it, and the information values acquired by the sensor are written in real time into the image files shot by the camera device. Massive network images are collected and screened according to set longitude and latitude values; image information is extracted and the position coordinates of a point of interest are set, so that each image's conical viewing bell mouth either covers the point of interest or not. The covering images form a point-of-interest image set, which is sorted by time to form an animated sequence of changing images.

Description

Image file format, image file generating method, image file generating device and application
This application is a divisional application. The original application number is 2012103411923; the filing date is 2012-09-16; the invention title is: "An image file format and its generation method, device, and application".
Technical Field
The present invention relates to the field of software, image information processing methods and programs, and in particular, to an image file format, an image file generation method, an image file generation device, and an image file application.
Background
An image file format is a format for recording and storing image information. To store, process, and transmit a digital image, a certain image format is required: the pixels of the image are organized and stored in a certain way, and the image data is saved as a file, yielding an image file. The image file format determines what type of information is stored in the file, how the file remains compatible with various applications, and how the file exchanges data with other files.
The current picture image formats include: BMP (bitmap); TIFF (Tagged Image File Format), which can carry author, copyright, remark, and custom information and can store multiple images; GIF (Graphics Interchange Format), an LZW-compressed format; JPEG (Joint Photographic Experts Group); PDF (Portable Document Format); PNG; and others.
The current video image formats include: Microsoft video: wmv, asf, asx; RealPlayer: rm, rmvb; MPEG video: mpg, mpeg, mpe; mobile phone video: 3gp; Apple video: mov; Sony video: mp4, m4v; other common video: avi, dat, mkv, flv, vob, etc. A video file is a container packaging format: different tracks are wrapped in a container, and the container format used determines the extensibility of the video file.
The summary of a present picture file includes some image attribute values: width, height, resolution, chromaticity, frame number, device model, color representation, focal length, ISO speed, time, and GPS longitude and latitude. It does not contain environment and state values at shooting time, such as the north-referenced direction angle or the horizontal tilt angle, nor values such as the ambient temperature, air pressure, or gas information at shooting time. If sensors are adopted and the required environment and state values are synchronously written into the image file, the information needed for post-processing of the image becomes available.
Disclosure of Invention
The image comprises a picture image and a video image.
The picture image file generally includes a header structure (FILEHEADER), an information header structure (INFOHEADER), a bitmap color table (RGBQUAD), and bitmap pixel data, wherein the header structure and the information header structure define attribute features of the file, i.e., information items, and the bitmap color table and the bitmap pixel data are logical locations where the image is stored.
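For orientation only, the following sketch (not part of the patent) reads these two header structures from a BMP file with Python's struct module; the field layout is the standard Windows BITMAPFILEHEADER / BITMAPINFOHEADER format.

```python
# A minimal sketch, assuming an uncompressed Windows BMP file, showing the
# file header / information header layout described above.
import struct

def read_bmp_headers(path):
    with open(path, "rb") as f:
        data = f.read(54)  # 14-byte file header + 40-byte info header
    # BITMAPFILEHEADER: signature, file size, two reserved words, pixel offset
    sig, file_size, _, _, pixel_offset = struct.unpack("<2sIHHI", data[:14])
    assert sig == b"BM", "not a BMP file"
    # BITMAPINFOHEADER: size, width, height, planes, bit depth, compression, ...
    (hdr_size, width, height, planes, bpp,
     compression, img_size, xppm, yppm, used, important) = struct.unpack(
        "<IiiHHIIiiII", data[14:54])
    return {"file_size": file_size, "pixel_offset": pixel_offset,
            "width": width, "height": height, "bit_depth": bpp}
```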
A video image file is composed of a file header, an index block, and data blocks. The data blocks contain the actual data streams, i.e. the image and sound sequence data. The index block contains a list of the data blocks and their positions in the file, providing random access to the data within the file. The file header contains general information about the file and defines the data format, the compression algorithm used, and other parameters. The frame images of a video file are generated from the data streams.
The invention aims to increase the information collected by a sensor in an image file, and the information can be used for post-processing the image.
A method for generating an image file format is characterized in that: writing the information value acquired by the sensor into the proper position of the picture image file; and writing the information value collected by the sensor into the proper position of the video image file.
A method for generating an image file format is characterized in that: writing the information value acquired by the sensor into the picture image file information item; and writing the information value acquired by the sensor into a file header, or an index block, or a data block, or a frame gap of the video image file.
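As one possible concrete realization of writing sensor values into a picture image file, the sketch below appends the values as a standard tEXt chunk to a PNG file just before its IEND chunk. The keyword "SensorInfo" and the JSON payload are illustrative assumptions, not a format defined by the patent.

```python
# A minimal sketch: append sensor values to a PNG picture file as a tEXt
# chunk (big-endian length, type, data, CRC32 over type+data), inserted
# immediately before the final IEND chunk.
import json, struct, zlib

def add_sensor_chunk(png_path, out_path, sensor_values):
    payload = json.dumps(sensor_values).encode("latin-1")
    data = b"SensorInfo\x00" + payload          # tEXt layout: keyword NUL text
    chunk = (struct.pack(">I", len(data)) + b"tEXt" + data +
             struct.pack(">I", zlib.crc32(b"tEXt" + data) & 0xFFFFFFFF))
    with open(png_path, "rb") as f:
        raw = f.read()
    iend = raw.rfind(b"IEND") - 4               # back up over the length field
    with open(out_path, "wb") as f:
        f.write(raw[:iend] + chunk + raw[iend:])

add_sensor_chunk("photo.png", "photo_sensor.png",
                 {"direction": "S20E", "tilt_deg": 12.5, "temp_c": 23.1})
```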
An image file coding format whose packaging form comprises a video track and an audio track, characterized in that the package also includes a sensor track, i.e. a track on which sensor information is recorded.
The sensor track is a vibration track, i.e. a vibration information track.
The picture image file information item includes: a picture image file abstract item, a picture image file attribute item, a picture image file header and a picture image file information header.
The information collected by the sensor is at least one or the combination of the following information: direction information, horizontal inclination angle information, vertical angle information, geographical position information, acceleration value information, light intensity information, noise value information, temperature information, air pressure information, humidity information, environmental gas information, environmental dust information and vibration information.
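Purely for illustration, the listed information items could be grouped into a single record before being written into the image file; the field names and units below are assumptions, not patent-defined identifiers.

```python
# Illustrative sketch only: one way to group the sensor information items
# listed above into a single record.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class SensorRecord:
    direction_deg: Optional[float] = None       # north-referenced direction angle
    tilt_deg: Optional[float] = None            # horizontal tilt of the lens axis
    vertical_deg: Optional[float] = None
    position: Optional[Tuple[float, float]] = None       # (latitude, longitude)
    acceleration: Optional[Tuple[float, float, float]] = None  # (ax, ay, az)
    light_lux: Optional[float] = None
    noise_db: Optional[float] = None
    temperature_c: Optional[float] = None
    pressure_hpa: Optional[float] = None
    humidity_pct: Optional[float] = None
    gas: dict = field(default_factory=dict)     # e.g. {"CO2": ..., "PM2.5": ...}
    vibration: Optional[Tuple[float, float]] = None       # (amplitude, frequency)
```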
The geographical location information includes: relative position information relative to a coordinate origin, or latitude and longitude absolute geographical position information.
The geographical location information includes: GPS positioning information, mobile communication base station positioning information (LBS).
The positioning information of the mobile communication base stations takes the form of signal phase differences between at least two base stations. Two base stations, at least one of which uses a directional antenna, can uniquely determine the position of a mobile terminal; with omnidirectional antennas, at least three base stations are required to uniquely determine the position of a mobile terminal.
The ambient gas information includes: oxygen content, carbon dioxide content, flue gas value, sulfur dioxide value, suspended particle value.
Further, the gas information may be a taste code, i.e. a basic encoding of human taste, so that taste can be recorded and reproduced: on the taste reproduction side, the taste code in the image file is extracted and used to control a taste generation device.
The device of the invention comprises a camera device and at least one sensor connected to it; the information values acquired by the sensor are written in real time into the image files shot by the camera device.
The sensor is: the sensor comprises a magnetic sensor, a horizontal sensor, a gravity sensor, an acceleration sensor, a light intensity sensor, a distance sensor, an inclination angle sensor, a temperature sensor, a humidity sensor, an air pressure sensor, a noise value sensor, a gas sensor, a dust detection sensor, a gyroscope sensor and a vibration sensor.
The sensor is: gyroscope, thermometer, barometer, hygrometer, Hall device.
An image file format, characterized by: the image file at least comprises one of the following information: direction information, horizontal inclination angle information, acceleration value information, light intensity information, noise value information, temperature information, air pressure information, humidity information, environmental gas information, environmental dust information and vibration information.
An image file format, characterized by: the image file also contains geographic position information or address positioning information.
The image file format is characterized in that: the image file also comprises the hardware parameter information of the camera device, at least the focal length value.
The image file format is characterized in that: the image file also includes the shooting distance recorded when the image was captured by the camera device.
The application of the image file format is characterized in that: the image file comprises direction information, horizontal dip angle information and geographical position information, a vector image is formed in the post-processing of the image file, and pixel points of the vector image comprise three-dimensional coordinate information.
The application of the image file format is characterized in that: and processing the vector image to generate a multi-dimensional image.
An application of an image file format, comprising the steps of: (1) acquiring at least two image files of the same object shot at different positions, the image files containing the direction information, horizontal tilt angle information, and geographical position information at shooting time; (2) transmitting the at least two image files to an information processing center, which extracts the direction, horizontal tilt angle, and geographical position information from the image files; (3) the information processing center picks up image pixels in the two image files, checks the consistency of the two images, and picks out the pixel points corresponding to a target point of the object in each image, determining the correspondence of the two pixel points either automatically or manually, i.e. by manually calibrating key target points of the object; (4) calculating the three-dimensional coordinates of the target point by trigonometric relations from the information extracted in step (2) and the corresponding pixel points picked up in step (3); (5) computing all points of the object to obtain their three-dimensional coordinates, or computing key object points and then obtaining the three-dimensional coordinates of the remaining points by simulation or interpolation; (6) generating a three-dimensional image or three-dimensional contour image of the object from its three-dimensional coordinates.
The application of the image file format is characterized in that: and establishing an information processing center, collecting or receiving an image file containing sensor information, and extracting sensor information values contained in the image file by the information processing center to form a geographical or time distribution map of a certain sensor information value.
The application of the image file format is characterized in that: the information processing center is a website which collects or receives the image file containing the sensor information uploaded from the network terminal.
The application of the image file format is characterized in that: the sensor information is one or a combination of the following: acceleration value information, noise value information, temperature information, air pressure information, humidity information, environmental gas information, dust value information, and vibration information.
The beneficial effects of the invention are as follows. The image file contains sensor-acquired information written synchronously at shooting time, and this information can be used for post-processing of the image file. For example, if two image files contain GPS longitude and latitude values, direction values, and horizontal tilt angle values, the three-dimensional coordinates of each point of the object in the image can be obtained through trigonometric calculation and then used as parameters of a vector diagram to construct a multi-dimensional vector image. As another example, many websites currently provide mobile-phone image uploading services, such as QQ space and other social network spaces, and current mobile phones carry a GPS, gyroscope, temperature sensor, level sensor, light intensity sensor, direction sensor, gravity sensor, and acceleration sensor, generally used for game control. If this sensor information is synchronously written into image files at shooting time, together with a timestamp, then processing and analyzing the images on the website yields temperature geographical distributions, temperature-over-time curves for different regions, and the like, with outlying and unreliable data removed during analysis. If an air pressure sensor is added to the phone, an air pressure distribution or time-variation map can be obtained; with a gas sensor, a pollution distribution or time-variation map; with a dust detection sensor, a distribution or time-variation map of inhalable particle (PM2.5) concentration; with a noise sensor, a noise distribution or time-variation map; with a humidity sensor, a humidity distribution or time-variation map. If the direction, horizontal tilt angle, and GPS longitude and latitude information of several images taken within a certain range at the same time are extracted, a multi-dimensional vector image can be synthesized by trigonometric calculation. In addition, filtering massive network images by condition can provide case-solving clues for the public security system. Video images containing motion or vibration information can reproduce the motion state at shooting time, realizing dynamic audio and video playback.
Drawings
FIG. 1 is a list of image file attributes according to the present invention.
Fig. 2 is a schematic diagram of the connection of the device of the present invention.
FIG. 3 is a schematic diagram of two known coordinate points calculating the coordinates of a third point by azimuth and horizontal tilt.
FIG. 4 is a schematic diagram of the present invention for calculating three-dimensional coordinates of image points from two intersecting images taken at different address locations.
Fig. 5 is a schematic diagram illustrating the principle of obtaining the geographical location information of other cameras by a certain camera according to the present invention.
FIG. 6 shows the steps of extracting information feature values from network images to generate a feature value map according to the present invention.
FIG. 7 shows the steps of extracting temperature information from network images to generate a temperature variation graph according to the present invention.
FIG. 8 shows the steps of extracting temperature information from network images to generate a temperature distribution map according to the present invention.
FIG. 9 shows the steps of extracting spatial position information from network images to generate a vector map according to the present invention.
FIG. 10 shows a method of forming a point-of-interest image set by collecting massive network images according to the present invention.
Fig. 11 is an embodiment of the connection between the camera module and the client software according to the present invention.
Fig. 12 is a flowchart of a self-checking procedure of the photographing module according to the present invention.
FIG. 13 is a flow chart of the photographing module of the present invention.
Fig. 14 is a schematic diagram of the posture of the photographing device and the motion-sensing reproduction system according to the present invention.
Fig. 15 is a schematic diagram of a motion-based reproduction system of the present invention.
FIG. 16 is a schematic diagram of a motion-sensing reproduction system for obtaining acceleration values by attitude function differential calculation according to the present invention.
Fig. 17 is a flowchart of the animation reproduction system control software of the present invention.
Fig. 18 shows the principle of calculation of the acceleration pulse time T.
Fig. 19 is an embodiment of the motion-sensing reproduction system of the present invention using a human body attachment.
Fig. 20 is a human body attachment used in the motion-sensing reproduction system of the present invention.
Fig. 21 is a schematic diagram of a device for simulating motion by electric pulse signals for the motion reproduction system of the present invention.
Fig. 22 is a diagram of the electrode arrangement of the device for simulating motion using electric pulse signals for the motion-sensing reproduction system of the present invention.
Fig. 23 is an embodiment of the hand-held device of the present invention for realizing simulated motion.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a list of image file attributes according to the present invention. The attribute list includes existing image attributes such as width, height, horizontal resolution, vertical resolution, bit depth, number of frames, device, focal length, color representation, ISO speed, time, and address. The following attributes are added to the list: 101 direction, 102 horizontal tilt angle, 103 temperature, 104 humidity, 105 air pressure, 106 noise value, 107 ambient gas, 108 distance, 109 acceleration, 110 vibration. 101 direction refers to the north-referenced direction angle value, e.g. S20E for 20 degrees east of south, realized by a magnetic sensor such as a compass, or by a gyroscope sensor. 102 horizontal tilt angle refers to the angle between the lens axis and the horizontal plane when the image is shot, realized by a tilt angle sensor. 103 temperature refers to the ambient temperature value at shooting time, realized by a temperature sensor. 104 humidity refers to the ambient humidity value at shooting time, realized by a humidity sensor. 105 air pressure refers to the ambient air pressure value at shooting time, realized by an air pressure sensor. 106 noise value refers to the noise value at shooting time, realized by a noise sensor. 107 ambient gas refers to ambient gas concentration values at shooting time, realized by corresponding gas sensors, e.g. sulfur dioxide concentration or inhalable particulate matter (PM2.5) concentration. 108 distance refers to the shooting distance, which can be obtained from the parameters of the focusing system. 109 acceleration refers to the acceleration value; acceleration is a vector with magnitude and direction, written a = x·i + y·j + z·k. 110 vibration refers to the environmental vibration information at shooting time, expressed as (A, f) = F(t), where A is the amplitude and f the vibration frequency, i.e. amplitude and frequency are functions of time t. Since vision and hearing are the main human senses but vibration is also a common sensation (e.g. riding in a car), recording vibration information is an important element of dynamic reproduction.
Fig. 2 is a schematic diagram of the connection of the device of the present invention: 201 sensor, 202 camera unit, 203 information processing unit, 204 image file containing sensor information. The 201 sensor stands for sensors that collect direction information, horizontal tilt angle information, geographical position information, acceleration value information, light intensity information, noise value information, temperature information, air pressure information, humidity information, ambient gas information, ambient dust information, and the like, and transmit the collected values in real time to the 203 information processing unit. The 202 camera unit shoots the image while the 201 sensor collects the sensor values at that moment; both are transmitted to the 203 information processing unit, which assigns the sensor values to the image file, generating an image file containing sensor information.
The images taken by the camera unit comprise picture images and video images for which the sensor information values can be assigned to frames, or frame gaps, or key frames.
FIG. 3 is a schematic diagram of calculating the coordinates of a third point from two known coordinate points via azimuth and horizontal tilt. For convenience of calculation, assume points A and B lie on the same horizontal plane; α1 is the horizontal tilt angle at point A, β1 the direction angle at point A, α2 the horizontal tilt angle at point B, and β2 the direction angle at point B. The coordinates of A are (x1, y1, z1) and of B are (x2, y2, z2); the coordinates of P (x3, y3, z3) can then be calculated from trigonometric functions.
FIG. 4 is a schematic diagram of calculating, according to the present invention, the three-dimensional coordinates of image points from two intersecting images taken at different address locations. 401 are the pixels of the same object point P, on the shot object, in the images taken at two different address positions. 402 is the center point of the image shot at address A, with coordinates (x1, y1, z1); the horizontal tilt angle of 402 is α1 and its direction angle is β1, so the center point 402 is recorded as the parameter tuple (x1, y1, z1, α1, β1). Similarly, the center point 403 of the image shot at address B is recorded as (x2, y2, z2, α2, β2). From α1 and β1 the horizontal tilt angle α11 and direction angle β11 of pixel P in the image taken at A can be computed; likewise, from α2 and β2 the horizontal tilt angle α21 and direction angle β21 of pixel P in the image taken at B. The three-dimensional coordinates (x3, y3, z3) of point P then follow from trigonometric relations.
Given at least two such images, the coordinates (x3, y3, z3) of pixel point P can be obtained through trigonometric relations and a system of simultaneous equations.
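A hedged numerical sketch of this calculation: each image contributes a viewing ray from its camera position along the recorded direction angle (azimuth from north) and horizontal tilt angle (elevation), and point P is recovered as the least-squares intersection of the two rays. The axis convention (x = east, y = north, z = up) is an assumption for illustration.

```python
# Sketch: recover a 3-D point from two camera positions plus the azimuth and
# elevation of the pixel ray in each image.
import numpy as np

def ray_direction(azimuth_deg, elevation_deg):
    b, a = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.sin(b) * np.cos(a),   # east component
                     np.cos(b) * np.cos(a),   # north component
                     np.sin(a)])              # up component

def triangulate(p1, az1, el1, p2, az2, el2):
    """Least-squares midpoint of the two (possibly skew) viewing rays."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1, d2 = ray_direction(az1, el1), ray_direction(az2, el2)
    # Solve p1 + t1*d1 ~= p2 + t2*d2 for t1, t2 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)           # 3x2 system
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    q1, q2 = p1 + t[0] * d1, p2 + t[1] * d2
    return (q1 + q2) / 2                      # midpoint of closest approach

# Two cameras 10 m apart, both sighting a point roughly 20 m north of camera A
print(triangulate((0, 0, 1.5), 0, 5, (10, 0, 1.5), -26.6, 4.5))
```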
In one embodiment, at least two image capturing devices are fixed, with fixed addresses, horizontal tilt angles, and direction angles, recorded as parameter values (xd1, yd1, zd1, αd1, βd1) and (xd2, yd2, zd2, αd2, βd2); the three-dimensional coordinate position of any pixel point of the captured images can be calculated from these two parameter tuples.
An application of an image file format, comprising the steps of: (1) acquiring at least two image files of the same object shot at different positions, the image files containing the direction information, horizontal tilt angle information, and geographical position information at shooting time; (2) transmitting the at least two image files to an information processing center, which extracts the direction, horizontal tilt angle, and geographical position information from the image files; (3) the information processing center picks up image pixels in the two image files, checks the consistency of the two images, and picks out the pixel points corresponding to a target point of the object in each image, determining the correspondence of the two pixel points either automatically or manually, i.e. by manually calibrating key target points of the object; (4) calculating the three-dimensional coordinates of the target point by trigonometric relations from the information extracted in step (2) and the corresponding pixel points picked up in step (3); (5) computing all points of the object to obtain their three-dimensional coordinates, or computing key object points and then obtaining the three-dimensional coordinates of the remaining points by simulation or interpolation; (6) generating a three-dimensional image or three-dimensional contour image of the object from its three-dimensional coordinates.
The target point of the object is a characteristic point of the object: pixels from at least two images of the object are compared and identified, and the pixel set with the highest similarity is the characteristic point.
The embodiment can be applied to large-scene three-dimensional image production, such as three-dimensional street view production, real-time battle three-dimensional topographic map generation and real-time large-scale target three-dimensional image generation.
A scheme for generating a combat three-dimensional topographic map or an enemy target three-dimensional image in real time: (1) shooting images containing address information, direction information and horizontal dip angle information by using cameras carried by airplanes, combat vehicles and individual soldiers; (2) the image is transmitted to an information processing center; (3) the information processing center generates a three-dimensional topographic map or a three-dimensional image or a three-dimensional outline image of an enemy target object in real time.
The technology of reconstructing a three-dimensional model of an object by optical imaging and trigonometric function relationship calculation has been reported in public, such as chinese patent publications 03128221.0, 02158343.9.
Fig. 5 is a schematic diagram of the principle by which one camera obtains the geographical position information of other cameras according to the present invention. 501 is a picture camera and 502 is another picture camera; 501 and 502 are used to capture images. 503 is an address camera used to determine the geographical positions of 501 and 502; the position may be absolute, such as longitude and latitude, or relative, such as position coordinates with respect to a reference point. The method comprises the following steps: (1) the image cameras shoot images while the address camera determines their geographic positions; (2) the image cameras transmit the image files to the information processing unit, and the address camera transmits the geographic position information to the information processing unit; (3) the information processing unit generates image files containing the geographical location information.
One application of this embodiment is to capture street view stereo pictures or three-dimensional pictures, where the address camera is located at a high altitude where the aerial view can be globally viewed, where at least two image cameras capture images, or where one image camera captures at least two sets of images at different geographic locations, and then generate stereo pictures or three-dimensional pictures based on the geographic location information, orientation angle information, and horizontal tilt angle information of the images.
FIG. 6 shows the steps of extracting information feature values from network images to generate a feature value map according to the present invention. 601 images from the network are actively collected, or a website is provided for network terminals to upload to, forming an image set or image library; 602 feature value selection: a feature value is a sensor information value contained in an image, such as a temperature value, humidity value, or noise value; once selected it becomes the value to be processed, and more than one feature value may be selected; 603 feature value filtering, i.e. screening or sorting the images in the library according to the set feature value; 604 extracting image feature information values, i.e. extracting the sensor values of interest from the image files; 605 removing unreliable feature value data, i.e. removing data groups with excessive dispersion, using a Gaussian distribution method or an averaging method; 606 generating the feature value map, i.e. the desired feature value distribution or time variation map.
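A minimal sketch of steps 605-606 under stated assumptions: samples deviating from the mean by more than a chosen number of standard deviations are discarded (one common reading of the "Gaussian distribution method"), and the survivors are averaged over latitude/longitude grid cells to form a simple distribution map.

```python
# Sketch: sigma-clipping outlier removal followed by a crude grid average.
import numpy as np

def remove_outliers(values, n_sigma=2.0):
    values = np.asarray(values, float)
    mu, sigma = values.mean(), values.std()
    return values[np.abs(values - mu) <= n_sigma * sigma]

def grid_average(lats, lons, values, cell_deg=0.1):
    """Average a feature value (e.g. temperature) over lat/lon grid cells."""
    cells = {}
    for lat, lon, v in zip(lats, lons, values):
        key = (round(lat / cell_deg), round(lon / cell_deg))
        cells.setdefault(key, []).append(v)
    return {k: sum(v) / len(v) for k, v in cells.items()}
```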
601 images from the network may be replaced by a particular image source, such as a particular image collection, or from a particular camera.
FIG. 7 shows the steps of extracting temperature information from network images to generate a temperature variation graph according to the present invention. 701 images from the network are actively collected, or a website is provided for network terminals to upload to, forming an image set or image library; 702 address range selection, i.e. setting an address range using longitude and latitude values; 703 address range filtering, i.e. screening or sorting the images in the library according to the set longitude and latitude values; 704 extracting the image temperature value and time value, i.e. extracting the temperature value and shooting time from each image file; 705 removing unreliable temperature data, i.e. removing data groups with excessive dispersion, using a Gaussian distribution method or an averaging method; 706 generating the temperature-time variation graph for the area.
Similarly, the air pressure information, the humidity information, the noise value information and the environmental gas information of the image can be extracted to respectively form a change chart of the corresponding value along with the change of time.
Images 701 from the network may be replaced by a particular image source, such as a particular image collection, or from a particular camera.
FIG. 8 shows the steps of extracting temperature information from network images to generate a temperature distribution map according to the present invention. 801 images from the network are actively collected, or a website is provided for network terminals to upload to, forming an image set or image library; 802 time period selection, i.e. setting a time range; 803 time filtering, i.e. filtering the images in the library by the set time; 804 extracting the image temperature value and address value, i.e. extracting the temperature value and address coordinates from each image file; 805 removing unreliable temperature data, i.e. removing data groups with excessive dispersion, using a Gaussian distribution method or an averaging method; 806 generating the regional temperature distribution map for the chosen time.
Similarly, the air pressure information, the humidity information, the noise value information and the environmental gas information of the image can be extracted to form a certain-time region distribution map of the corresponding values respectively.
The image from the network 801 may be replaced by a particular image source, such as a particular image collection, or from a particular camera device.
Fig. 9 shows a step of extracting spatial position information of an image from a network to generate a vector map according to the present invention. 901 images from the network are actively collected or websites are provided for the network terminal to upload so as to form an image set or an image library; 902, selecting an address range, namely setting an address range, and setting an address range value by using a longitude and latitude value or setting an address range value by using a physical position coordinate value; 903 address range filtering, namely screening or sequencing the images in the image library according to the set longitude and latitude values or coordinate values; 904, time period selection, i.e. setting a time range, the time value can be default, i.e. a certain time, or a period of time, or any time; 905 time period filtering, namely filtering the images in the image library according to a set time range; 906 extracting image information, namely extracting an address information value, a direction angle value and a horizontal inclination angle value in the image file, and extracting a focal length value and a distance value as reference values; 907 establishing an image vector value, namely calculating the three-dimensional coordinates of each pixel or key pixel of the image according to the image information extracted in the step 906 by a trigonometric function relationship to form an image pixel point vector value, wherein the key pixel refers to a pixel capable of representing the position of the key point of the image; 908 vector value calculation processing, namely processing the vector value of each pixel point of the image; 909 image fusion processing, i.e., fusing or replacing image pixels to synthesize a new image; 910 generate a plan view or a multi-dimensional view, i.e. a composite image is either a plan view, or a perspective view, or a multi-dimensional view from multiple perspectives.
Images 901 from the network may be replaced by a particular image source, such as a particular image collection, or from a particular camera device.
The embodiment can be used for three-dimensional image production.
The embodiment can be used for three-dimensional street view production.
Image fusion means processing at least two images of smaller scenes to generate an image of a larger scene whose pixels come from the fused images; or processing images focused at different points to generate a single clear image in which all pixels are well focused.
The technology of combining a plurality of photos into one photo has been reported in public, such as Chinese patent publications 200910104632.1, 200510125371.3, 201010527509.3 and 200910147247.5.
FIG. 10 shows a method of forming a point-of-interest image set by collecting massive network images according to the present invention. 1001 collecting massive network images, i.e. actively collecting images from the network; 1002 address range selection, i.e. setting an address range using longitude and latitude values; 1003 address range filtering, i.e. screening the collected images according to the set longitude and latitude values; 1004 extracting image information, i.e. extracting the address information value, direction angle value, and horizontal tilt angle value from each image file, with the focal length and distance values extracted as reference values; 1005 point-of-interest selection, i.e. setting the position coordinates of a point of interest; once the coordinates are determined, the direction angle and horizontal tilt angle values define for each image a conical viewing-direction coverage shape, whose bell mouth either covers the point of interest or not; 1006 filtering out non-covering images, i.e. discarding images whose viewing cone (bell mouth) does not cover the point of interest; 1007 forming the point-of-interest image set; 1008 time-ordering the point-of-interest image set to form an animated sequence of changing images. A sketch of the coverage test follows.
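A hedged sketch of the step-1006 coverage test: an image covers the point of interest if the angle between the camera's viewing direction and the vector toward the point is within the cone half-angle. In practice the half-angle would derive from the recorded focal length; here it is simply a parameter.

```python
# Sketch: test whether a point of interest lies inside a camera's view cone.
import numpy as np

def view_dir(azimuth_deg, tilt_deg):
    b, a = np.radians(azimuth_deg), np.radians(tilt_deg)
    return np.array([np.sin(b) * np.cos(a), np.cos(b) * np.cos(a), np.sin(a)])

def covers_point(cam_pos, azimuth_deg, tilt_deg, half_angle_deg, poi):
    to_poi = np.asarray(poi, float) - np.asarray(cam_pos, float)
    to_poi /= np.linalg.norm(to_poi)
    cos_angle = float(view_dir(azimuth_deg, tilt_deg) @ to_poi)
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= half_angle_deg

# Camera at the origin looking north with a 30-degree half-angle cone
print(covers_point((0, 0, 0), 0, 0, 30, (5, 50, 2)))   # True: nearly due north
```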
1001 the collection of networked mass images may be replaced by a particular image source, such as a particular image collection, or from a particular camera.
The embodiment can be applied to police case-solving: people are very likely to have taken pictures at the place where an incident occurred, and collecting and analyzing those photo images afterwards can provide investigative clues.
Fig. 11 is an embodiment of the connection between the shooting module and the client software according to the present invention. The apparatus of the invention may be a camera, video camera, mobile phone, head-mounted terminal, computer, etc., equipped with client software or a shooting module. 1101 internet server, 1102 client software, 1103 shooting module, i.e. shooting module software, which may be embedded software or a shooting module program. The 1103 shooting module comprises: 1104 image capture acquisition unit, 1105 sensor information acquisition unit, 1106 address positioning information acquisition unit, and 1107 information processing unit. The client software 1102 is connected with the shooting module 1103; it exchanges data with the internet server 1101, uploading and downloading. The shooting module 1103 transmits the image file processed by the information processing unit 1107 to the client software 1102, which uploads it to the internet server 1101. The image capture acquisition unit 1104 acquires the shot image; the sensor information acquisition unit 1105 acquires sensor information; the address positioning information acquisition unit 1106 acquires address positioning information such as GPS information and base station information; and the information processing unit 1107 assigns the sensor information and the address positioning information to the image file acquired by the capture unit 1104.
Of course, the shooting module can also directly upload the image file processed by the information processing unit to the internet server.
According to the embodiment, a plurality of clients can be used for uploading image files, and the street view three-dimensional image is established through calculation processing of a website.
Another application of this embodiment is that the motion during shooting can be recorded and transmitted to other people to realize motion reproduction, and the hardware device can be a mobile phone as described in fig. 23.
An application of an image file format, characterized by: there is shooting module software, and the shooting module includes: the system comprises a camera shooting acquisition unit, a sensor information acquisition unit, an address positioning information acquisition unit and an information processing unit, wherein the sensor information acquisition unit is responsible for acquiring information acquired by a sensor, the address positioning information acquisition unit is responsible for acquiring address positioning information, and the information processing unit is responsible for assigning the sensor information acquired by the sensor information acquisition unit and the address positioning information acquired by the address positioning information acquisition unit to an image file shot by the camera shooting acquisition unit.
The shooting module is connected with the client software.
Fig. 12 is a flowchart of the self-check procedure of the shooting module according to the present invention. 1201 start; 1202 camera unit detection, i.e. finding and initializing the camera unit; 1203 camera unit start; 1204 acquiring camera unit parameters, i.e. obtaining device attributes such as the focal length for later use; 1205 sensor 1 detection, i.e. finding and initializing sensor 1, e.g. a direction sensor; 1206 sensor 1 start; 1207 sensor n detection, i.e. finding and initializing all sensors in turn up to sensor n; 1208 sensor n start; 1209 GPS detection, i.e. finding and initializing the GPS device; 1210 GPS start; 1211 LBS unit detection, i.e. detecting positioning information contained in base station signals; 1212 LBS unit start, i.e. establishing the channel for collecting base station positioning information; 1213 ready to shoot.
FIG. 13 is a flow chart of the shooting module of the present invention. 1301 shooting; 1302 acquiring the shot image, i.e. obtaining the original image file of the object; 1303 obtaining address positioning information through GPS or LBS, either as latitude and longitude, or as base station signal phase differences with the corresponding base station codes, which the information processing center analyzes to obtain the geographical position; 1304 obtaining sensor information, i.e. reading all activated sensors, such as: direction information, horizontal tilt angle information, acceleration value information, light intensity information, noise value information, temperature information, air pressure information, humidity information, ambient gas information, and ambient dust information; 1305 acquiring the time at which the shot image was acquired, which may be local time, network standard time, or timestamp calibration; 1306 generating an image file containing the sensor information.
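An illustrative sketch of this shooting flow; the camera, sensors, and gps objects and their capture()/read() interfaces are hypothetical stand-ins for the units of FIG. 11, not a real device API.

```python
# Sketch of the FIG. 13 flow: capture, gather positioning / sensor / time
# values, and hand everything to a writer such as the PNG-chunk example above.
import time

def shoot(camera, sensors, gps):
    image = camera.capture()                  # step 1302: raw image
    record = {
        "position": gps.read(),               # step 1303: GPS or LBS fix
        **{name: s.read() for name, s in sensors.items()},  # step 1304
        "timestamp": time.time(),             # step 1305: local or network time
    }
    return image, record                      # step 1306: merge into the file
```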
Fig. 14 is a schematic diagram of the posture of the shooting device and of the motion-sensing reproduction system according to the present invention. 1401 is a camera or a motion-sensing reproduction device. For the camera, the sensors for posture data acquisition are a direction sensor and two mutually perpendicular horizontal tilt angle sensors, which can generally be realized with a gyroscope sensor. Dynamic acceleration data can be acquired with an acceleration sensor; the acceleration vector is recorded and transmitted as its x, y, and z coordinate components.
The dynamic acceleration can also be obtained by differentiating a function of the attitude data changing along with the time, namely, the differential calculation of the attitude function.
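A minimal sketch of this differential calculation of the attitude function: angular rate and angular acceleration are estimated from time-stamped attitude samples by finite differences.

```python
# Sketch: finite-difference derivatives of one attitude component over time.
import numpy as np

def attitude_derivatives(t, angle_deg):
    """t: sample times in seconds; angle_deg: one attitude component."""
    rate = np.gradient(np.asarray(angle_deg, float), t)    # deg/s
    accel = np.gradient(rate, t)                           # deg/s^2
    return rate, accel

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
tilt = np.array([0.0, 1.0, 3.0, 6.0, 10.0])
print(attitude_derivatives(t, tilt))
```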
For the motion-sensing reproduction device there are six degrees of freedom: forward/backward in the y direction, left/right in the x direction, up/down in the z direction, and rotation about the y, x, and z axes.
Fig. 15 is a schematic diagram of the motion-sensing reproduction system of the present invention. 1501 image source, i.e. the image shot by the shooting device of the invention; the image file contains direction information and two perpendicular horizontal tilt angle values, and may also contain acceleration information. 1502 image information extraction, i.e. extracting the 1503 X horizontal tilt value, 1504 Y horizontal tilt value, 1505 direction angle value, 1506 acceleration X component ax, 1507 acceleration Y component ay, and 1508 acceleration Z component az from the image file. 1509 coordinate conversion, i.e. processing the 1503 X tilt value, 1504 Y tilt value, and 1505 direction angle value into rotation coordinates of the motion reproduction system. 1514 attitude servo units, comprising the 1515 X rotational position servo unit, 1516 Y rotational position servo unit, and 1517 Z rotational position servo unit. 1510 acceleration servo units, comprising the 1511 X acceleration servo unit, 1512 Y acceleration servo unit, and 1513 Z acceleration servo unit.
The purpose of the calculation process is to restore the dynamic simulation quantities from the motion information values or functions; the unit performing this is the dynamic calculation unit.
FIG. 16 is a schematic diagram of the motion-sensing reproduction system obtaining acceleration values by differentiating the attitude function according to the present invention. 1601 image source, i.e. the image shot by the shooting device; the image file contains direction information and two perpendicular horizontal tilt angle values. 1602 image information extraction, i.e. extracting the 1603 X horizontal tilt value, 1604 Y horizontal tilt value, and 1605 direction angle value. 1606 coordinate conversion, i.e. conversion to the 1607 rotation angle values, comprising the X, Y, and Z rotation components. 1608 attitude servo unit, comprising the X, Y, and Z rotational position servo units, driven by the respective rotation components. 1609 rotation value differentiation, i.e. differentiating the function of each rotation value over time to obtain the X, Y, and Z rotation differential values. 1610 rotational acceleration servo units, comprising X, Y, and Z units driven by the respective rotation differential values.
The purpose of the calculation process is to restore the dynamic simulation quantities from the motion information values or functions; the unit performing this is the dynamic calculation unit.
Fig. 17 is a flowchart of the animation reproduction system control software of the present invention. 1701 starts; 1702, acquiring image motion information, namely acquiring direction information, two vertical horizontal inclination angle information and acceleration information in an image file; 1703, coordinate conversion, namely processing the motion information and converting the motion information into dynamic position parameters; 1704, acquiring attitude parameter values, namely acquiring spatial rotation position parameters in the dynamic position parameters, and driving 1705 an attitude servo unit; 1706 acquires an acceleration parameter value, 1707 reads the acceleration parameter, 1708 detects the acceleration, 1709 resets the acceleration servo unit if the acceleration value a =0, and 1710 outputs an acceleration servo unit pulse if the acceleration value a ≠ 0.
The servo unit of the dynamic reproduction system outputs to the execution device, and the dynamic movie devices are available at present, such as Chinese patent publications 200910304455.1 and 94206426.7.
The control signal of the servo unit of the dynamic reproduction system of the invention is output to the dynamic part of the mobile phone, thus realizing the dynamic playing of the mobile phone, and figure 23 is the hardware configuration of the mobile phone.
Fig. 18 shows the principle of calculation of the acceleration pulse time T. The spatial displacement of the motion-sensing reproduction device is limited: in practice an acceleration may last a relatively long time, but the device cannot sustain it for that long. The solution is pulsed acceleration control with pulse period T = (2s/a)^(1/2), where s is the maximum allowable displacement of the motion-sensing reproduction device and a is the instantaneous acceleration value provided by the image signal; a reset time is provided after each pulse period completes.
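A small sketch of this rule; the 5 cm displacement and 2 m/s^2 acceleration in the example are arbitrary illustration values.

```python
# Sketch: pulse period T = sqrt(2*s/a) for maximum allowable displacement s
# and instantaneous acceleration a taken from the image signal (follows from
# s = a*T^2/2); the unit is reset after each pulse.
from math import sqrt

def pulse_period(s_max, a):
    return sqrt(2.0 * s_max / a) if a > 0 else 0.0

print(pulse_period(0.05, 2.0))   # 5 cm travel at 2 m/s^2 -> ~0.22 s
```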
Fig. 19 is an embodiment of the motion-sensing reproduction system of the present invention using a human body attachment. At present, dynamic movies and dynamic games simulate and reproduce motion scenes with dynamic seats, which are bulky and consume much energy, and are therefore ill-suited to home use. This embodiment realizes motion reproduction with a human body attachment, of which there are two types: mechanical and electric-pulse. The mechanical type includes hydraulic extrusion, air pressure, motor, linear motor, electromagnetic, eccentric wheel, and electro-acoustic vibration modes, i.e. mechanical pressure is used to simulate motion acceleration. The electric-pulse type applies voltage of a certain frequency to stimulate the skin and acupuncture points of the human body, causing the muscles to contract and flutter, thereby simulating motion acceleration.
1901 is the right arm, 1902 the left arm, 1903 the torso, 1904 the left thigh, 1905 the left calf, 1906 the right calf, and 1907 the right thigh.
1908 is a cushion; since the human body is in close contact with the cushion when sitting, dynamic components can be arranged inside it.
Fig. 20 shows a human body attachment used in the motion-sensing reproduction system of the present invention. This embodiment may be electromagnetic: the attachment is composed of several electromagnets, each consisting of a coil and a core that can move freely in the coil. The attachment in the figure has 4 electromagnets, each composed of a 2001 coil and a 2002 iron core; the core 2002 can move within the coil 2001, so energizing the coil moves the core. 2003 is the human torso or limbs; the motion of the core 2002 acts on the body 2003 to produce a sensation of pressure. The drive signals of the electromagnets derive from the instantaneous acceleration values extracted from the image signal, so motion acceleration is simulated through pressure.
Alternatively, a 2005 eccentric wheel may be used: a motor rotates the eccentric wheel to produce acceleration or vibration.
Alternatively, a 2004 electro-acoustic vibration mode is adopted: an electro-acoustic transducer (electromagnetic moving-coil, piezoelectric ceramic, magnetostrictive, etc.) converts the electric signal into sound and then into vibration, or directly into vibration. Since in the environment the frequency of vibration and the frequency of sound are generally consistent, the vibration device can be replaced by an electro-acoustic device.
Since the electromagnet, the eccentric wheel, and the electro-acoustic vibration device all generate motion sensation, they are collectively called dynamic components and may be arranged in combination.
As an embodiment, the dynamic component is arranged on the cushion to form a dynamic cushion.
Fig. 21 is a schematic diagram of a device that simulates motion with electric pulse signals for the motion-sensing reproduction system of the present invention. The system of this embodiment comprises a motion parameter acquisition unit, a dynamic calculation unit, and several groups of modulator, pulse signal generator, and pulse signal electrode pair, plus a common electrode. The motion parameters are provided by the image file. The dynamic calculation unit evaluates the acquired motion parameter function, performing differential or integral calculation on each motion component, and supplies the result to the modulator. The modulator drives the pulse signal generator, controlling the magnitude, frequency, variation of magnitude and frequency, and voltage polarity of the pulse signal, which is finally output to the pulse signal electrodes; the electrodes are attached closely to the skin or acupuncture points at suitable positions on the human body.
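Illustrative only: one plausible mapping from a computed motion component to the pulse parameters the modulator controls (voltage magnitude, frequency, polarity). The ranges and the linear mapping are assumptions, not values from the patent.

```python
# Sketch: scale a motion component into pulse voltage, frequency, and polarity.
def modulate(component, max_component=10.0,
             v_range=(0.0, 40.0), f_range=(1.0, 120.0)):
    level = min(abs(component) / max_component, 1.0)   # normalize to [0, 1]
    voltage = v_range[0] + level * (v_range[1] - v_range[0])
    freq_hz = f_range[0] + level * (f_range[1] - f_range[0])
    polarity = 1 if component >= 0 else -1
    return voltage, freq_hz, polarity

print(modulate(4.5))   # mid-range component -> mid-range pulse parameters
```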
Fig. 22 is a diagram of the electrode arrangement of the device that simulates motion with electric pulse signals for the motion-sensing reproduction system of the present invention. Several electrode pairs may be arranged at an angle to one another; preferably two pairs of electrodes are arranged perpendicularly on the skin or acupuncture points of the human body. The signals of pulse signal electrode pairs 1 and 2 come from their respective pulse signal generators, which are controlled by a modulator whose control signal comes from the dynamic calculation unit. The data of the dynamic calculation unit can be based on the 1506 acceleration X component ax, 1507 acceleration Y component ay, and 1508 acceleration Z component az in the scheme of FIG. 15, or on the 1609 rotation value differentials in the scheme of FIG. 16: the X, Y, and Z rotational differential values.
Fig. 23 is an embodiment of the handheld device of the present invention for realizing simulated motion. 2301 are electromagnets, which may be arranged in several groups oriented along X, Y and Z; 2302 are eccentric wheels, likewise in groups along X, Y and Z; 2303 are electric pulse electrodes, which may also be grouped; 2304 is the body of the handheld device, i.e. a hand-held electronic device such as a mobile phone, mobile terminal, tablet computer or camera; 2305 is a camera, such as a mobile phone camera; 2306 is the hardware circuit, including input/output interfaces and the hardware needed to run the software, plus an electric pulse generator and a modulator when pulse electrodes are fitted; 2307 are sensors, i.e. motion sensors such as gyroscope, acceleration, tilt and direction sensors, for collecting motion information; 2308 is the touch screen; 2309 is a finger controlling the touch screen. The input/output interfaces of hardware circuit 2306 are connected respectively to eccentric wheel 2302, electromagnet 2301, sensor 2307, touch screen 2308, electric pulse electrode 2303 and camera 2305.
The figure shows a pair of identical handheld devices exchanging information over the Internet, as a schematic of one device transmitting a dynamic image to the other. Taking a mobile phone as an example: 2304 is the phone body with the usual phone configuration; 2302 is an eccentric wheel providing vibration. Current phones already carry this part, but drive it only as an on/off signal output; the dynamic sensation provided by the present invention is instead a continuously varying analog value, realized by a hardware servo unit or a software servo unit.
A present-day mobile phone is thus already equipped with a camera device, motion-sensing devices (dynamic sensors) and an eccentric wheel, and the vibration generated by the eccentric wheel can crudely simulate the dynamics of the shooting moment or of scenario editing.
The electromagnet or the eccentric wheel is a dynamic component, and a dynamic component can generate action.
A mobile phone, characterized in that: it comprises a dynamic component, the motion of which is controlled by motion information contained in the image file being played.
A mobile phone, characterized in that: it comprises electric pulse electrodes connected to an electric pulse generator, the generator being connected to a modulator, and the modulator acquiring a control signal.
Mobile phone software, characterized in that: while an image is being shot, the information collected by the dynamic sensors is acquired and written into the image file in real time.
Mobile phone software, characterized in that: when the image is played, the dynamic sensor information contained in the image is extracted and used to control the action of the dynamic components.
When the mobile phone shoots an image: dynamic information, such as acceleration values collected by a dynamic sensor (e.g. a gyroscope sensor), is written instantly into the shot image, forming an image file containing the acceleration information.
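As an illustration of this capture-time writing, here is one possible way to embed the readings into a JPEG's EXIF information item; the use of the piexif library, the UserComment tag and the JSON payload are all assumptions, since the disclosure does not fix a concrete container:

```python
import json
import piexif  # third-party: pip install piexif

def write_motion_metadata(jpeg_path, accel_xyz, direction_deg, tilt_deg):
    """Embed instantaneous sensor readings into a captured JPEG."""
    exif_dict = piexif.load(jpeg_path)
    payload = json.dumps({
        "ax": accel_xyz[0], "ay": accel_xyz[1], "az": accel_xyz[2],
        "direction": direction_deg, "tilt": tilt_deg,
    })
    # EXIF UserComment starts with an 8-byte character-code prefix.
    exif_dict["Exif"][piexif.ExifIFD.UserComment] = (
        b"ASCII\x00\x00\x00" + payload.encode("ascii"))
    piexif.insert(piexif.dump(exif_dict), jpeg_path)
```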
When the mobile phone plays the image: on one hand the image file is played normally; on the other hand the dynamic information it contains, such as the acceleration value, is extracted and drives a dynamic component of the phone, such as the eccentric wheel, through a servo unit.
The servo unit makes the output quantity respond quantitatively to the input quantity. Existing phone vibration outputs have no hardware for controlling voltage magnitude and polarity, but a servo can be realized in software by controlling the duty ratio and duration of the output voltage, i.e. a simulated servo unit.
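A minimal sketch of such a simulated servo unit, assuming the platform exposes only a binary vibration switch (here the stand-in callable `vibrate(on)`): the on-time within each PWM period tracks the acceleration magnitude, approximating a variable analog output:

```python
import time

def software_servo_pulse(vibrate, accel, full_scale=9.8, period_s=0.02):
    """One PWM period of the simulated servo: duty ratio follows |accel|.

    vibrate(True/False) switches the phone's vibration motor; full_scale
    and period_s are assumed tuning constants.
    """
    duty = min(abs(accel) / full_scale, 1.0)
    vibrate(True)
    time.sleep(duty * period_s)          # on-time proportional to |accel|
    vibrate(False)
    time.sleep((1.0 - duty) * period_s)  # remainder of the period off
```

Calling this in a loop with fresh acceleration values extracted from the image file yields a vibration whose perceived strength varies continuously, even though the underlying output is on/off.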

Claims (6)

1. A method for generating an image file format, characterized in that:
a camera device is provided, at least one sensor is connected to the camera device, and the information values collected by the sensor are written in real time into the image files shot by the camera device;
the sensor is one of, or a combination of, the following sensors:
a magnetic sensor, horizontal sensor, gravity sensor, acceleration sensor, light intensity sensor, distance sensor, inclination sensor, temperature sensor, humidity sensor, air pressure sensor, noise sensor, gas sensor, dust detection sensor, gyroscope sensor, vibration sensor and GPS sensor;
the information values collected by the sensor are written into an information item of a picture image file, or into the file header, index block, data block, a frame or a frame gap of a video image file;
the information values collected by the sensor comprise at least one of, or a combination of, the following: direction information, horizontal inclination information, acceleration information, light intensity information, noise information, temperature information, air pressure information, humidity information, ambient gas information, ambient dust information, vibration information and coordinate position information;
geographical position information or address positioning information is also written into the image file;
the method comprises the following steps: (1) collecting massive network images; (2) selecting an address range, i.e. setting an address range whose bounds are given as longitude and latitude values; (3) filtering by address range, i.e. screening the collected images against the set longitude and latitude values; (4) extracting image information, i.e. extracting the address information value, direction angle value and horizontal inclination value from each image file, and further extracting the focal length value and distance value as reference values; (5) selecting a point of interest, i.e. setting the position coordinates of a point of interest; once these coordinates are determined, the direction angle and horizontal inclination values of the images form a group of cone-shaped view-direction coverage figures, the cone bell mouth covering the point of interest; (6) filtering out non-covering images, i.e. filtering the group of cone-shaped view-direction coverage figures and discarding every image whose cone-shaped view coverage, that is, whose cone bell mouth, does not contain the point of interest; (7) forming the point-of-interest image set; (8) time-ordering the point-of-interest image set to form an animated image of change (a sketch of steps (3)-(8) follows this claim);
a dynamic component is provided to realize motion-sense reproduction; the dynamic component comprises a human body attachment, a cushion or a handheld device, and is of a mechanical type or an electric pulse type; the mechanical type is one of, or a combination of, the following forms: hydraulic extrusion, air pressure, motor, linear motor, electromagnet, eccentric wheel and electroacoustic vibration, i.e. mechanical extrusion force is used to simulate motion acceleration; the electric pulse type stimulates the skin and acupuncture points of the human body with a voltage of a certain frequency, making the muscles contract and flutter to simulate motion acceleration; the motion-sense reproduction device operates in the following steps:
(s1) acquiring the image motion information, i.e. the direction information, horizontal inclination information and acceleration information in the image file;
(s2) coordinate transformation, i.e. processing the motion information and transforming it into dynamic position parameters;
(s3) acquiring the attitude parameter values, i.e. the spatial rotation position parameters among the dynamic position parameters, and driving the attitude servo units;
(s4) acquiring the acceleration parameter value and detecting the acceleration: if the acceleration value a = 0, the acceleration servo unit is reset; if a ≠ 0, the acceleration servo unit outputs a pulse;
further comprising the steps of:
(i) extracting image information, i.e. the X horizontal inclination value, the Y horizontal inclination value and the direction angle value;
(ii) coordinate conversion, i.e. converting these into rotation angle values comprising X, Y and Z rotation components;
(iii) driving the attitude servo units, which comprise X, Y and Z rotary position servo units driven respectively by the X, Y and Z rotation components;
(iv) rotation differential calculation, i.e. differentiating the rotation values as functions of time to obtain the X, Y and Z rotation differential values;
(v) driving the rotation acceleration servo units, which comprise X, Y and Z rotation acceleration servo units driven respectively by the X, Y and Z rotation differential values (a sketch of steps (i)-(v) also follows this claim).
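The cone-coverage sketch referenced in steps (3)-(8) of claim 1: a simplified flat-earth version that range-filters images, keeps those whose view-direction cone (reduced here to an azimuth sector; the horizontal inclination is ignored for brevity) covers the point of interest, and time-orders the survivors. The record field names and the half-angle are assumptions for illustration:

```python
import math

def covers_point(img, focus_lat, focus_lon, half_angle_deg=30.0):
    """True if the focus point falls inside the image's cone-shaped view
    coverage; img carries the extracted lat/lon and direction angle."""
    dy = focus_lat - img["lat"]                                         # northward
    dx = (focus_lon - img["lon"]) * math.cos(math.radians(img["lat"]))  # eastward
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    diff = abs(bearing - img["direction"]) % 360.0
    return min(diff, 360.0 - diff) <= half_angle_deg

def focus_point_animation(images, focus_lat, focus_lon, lat_range, lon_range):
    """Steps (3)-(8): address-range filter, cone filter, time ordering."""
    in_range = [im for im in images
                if lat_range[0] <= im["lat"] <= lat_range[1]
                and lon_range[0] <= im["lon"] <= lon_range[1]]
    covering = [im for im in in_range if covers_point(im, focus_lat, focus_lon)]
    return sorted(covering, key=lambda im: im["time"])  # animation frame order
```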
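And a sketch of steps (i)-(v): tilt and direction samples are taken directly as the X/Y/Z rotation components (the coordinate conversion is treated as the identity, an assumption; a real device would apply its own mounting transform), then differentiated by finite differences to drive the rotation acceleration servo units:

```python
def rotation_servo_drive(samples):
    """samples = [(t, x_tilt, y_tilt, direction), ...] in time order.

    Returns one drive record per interval: the newest rotation values for
    the X/Y/Z rotary position servos, and their finite-difference
    derivatives for the X/Y/Z rotation acceleration servos.
    """
    drives = []
    for (t0, *r0), (t1, *r1) in zip(samples, samples[1:]):
        dt = t1 - t0
        rot_diff = [(b - a) / dt for a, b in zip(r0, r1)]
        drives.append({
            "attitude": dict(zip("XYZ", r1)),        # rotary position servos
            "accel":    dict(zip("XYZ", rot_diff)),  # rotation acceleration servos
        })
    return drives
```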
2. The method for generating an image file format according to claim 1, characterized in that:
the human body attachment consists of at least one electromagnet, each electromagnet comprising a coil and an iron core that moves freely inside the coil; the moving core acts on the human body to produce a sensation of pressure, and the drive signal of the electromagnet is derived from the instantaneous acceleration value extracted from the image signal, so that motion acceleration is simulated by pressure;
or an eccentric wheel is adopted, a motor rotating to drive the eccentric wheel and produce acceleration or vibration;
or an electroacoustic vibration mode is adopted, the electroacoustic transducer being of the electromagnetic moving-coil, ceramic piezoelectric or magnetostrictive type.
3. The method for generating an image file format according to claim 1, characterized in that:
the electric pulse type motion-sense reproduction system consists of a motion parameter acquisition unit, a dynamics calculation unit, and several groups of modulator, pulse signal generator and pulse electrode pair, plus a common electrode; the motion parameters are provided by the image file; the dynamics calculation unit differentiates or integrates each motion component of the acquired motion parameters and supplies the results to the modulators; each modulator drives its pulse signal generator, controlling the amplitude, frequency, amplitude change, frequency change and voltage polarity of the pulse signal, which is finally output to the pulse electrodes, the electrodes being attached to the skin or acupuncture points at suitable positions on the human body.
4. The method for generating an image file format according to claim 1, 2 or 3, characterized in that: the address positioning information is acquired by an address camera; a first image camera and a second image camera are provided for shooting images, and an address camera is provided to determine the geographical positions of the first and second image cameras, the geographical position being longitude and latitude information or position coordinates relative to a reference point; the method comprises the following steps: (A) the image cameras shoot images while the address camera determines their geographical positions; (B) the image cameras transmit the image files, and the address camera transmits the geographical position information, to the information processing unit; (C) the information processing unit generates an image file containing the geographical position information.
5. The method for generating an image file format according to claim 1, 2 or 3, characterized in that: the image file contains direction information, horizontal inclination information and geographical position information, and during post-processing of the image file the three-dimensional coordinates of the image pixels are calculated through trigonometric relations.
6. The method for generating an image file format according to claim 1, 2 or 3, comprising the following steps: (a) acquiring at least two image files of the same object shot from different positions, the image files containing the direction information, horizontal inclination information and geographical position information at the time of shooting; (b) transmitting the at least two image files to an information processing center, which extracts the direction information, horizontal inclination information and geographical position information from them; (c) the information processing center picks up image pixels in the two files, checks the consistency of the two images, picks up the pixel points corresponding to a target point of the object in each of the two images and establishes their correspondence, or calibrates key target points of the object; (d) calculating the three-dimensional coordinates of the target point through trigonometric relations, from the information extracted in step (b) and the corresponding pixel points picked up in step (c); (e) computing all pixel points of the object to obtain their three-dimensional coordinates, or computing key object points and then obtaining the three-dimensional coordinates of the remaining pixels by simulation or interpolation; (f) generating a three-dimensional image or a three-dimensional contour image of the object from the three-dimensional coordinates of its pixel points (a triangulation sketch for step (d) follows).
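A sketch of the trigonometric calculation in step (d): each camera position, together with the direction (azimuth) and horizontal inclination (elevation) toward the target pixel, defines a viewing ray, and the target's three-dimensional coordinates are recovered as the closest point between the two rays (midpoint method). A local east/north/up Cartesian frame and degree-valued angles are assumed; the function names are illustrative:

```python
import numpy as np

def ray_direction(azimuth_deg, elevation_deg):
    """Unit view vector: azimuth clockwise from north, elevation above horizon."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.sin(az) * np.cos(el),   # east
                     np.cos(az) * np.cos(el),   # north
                     np.sin(el)])               # up

def triangulate(p1, az1, el1, p2, az2, el2):
    """3D target point from two camera positions and viewing angles."""
    d1, d2 = ray_direction(az1, el1), ray_direction(az2, el2)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b          # near zero when the rays are parallel
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    # Midpoint of the common perpendicular between the two rays.
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0
```

In practice the two rays rarely intersect exactly, so the midpoint of their common perpendicular is the usual estimate; a denominator near zero signals near-parallel rays, i.e. a baseline too short for a reliable fix.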
CN201811353447.1A 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application Active CN109522280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811353447.1A CN109522280B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210341192.3A CN102867055B (en) 2012-09-16 2012-09-16 A kind of image file format and generation method and device and application
CN201811353447.1A CN109522280B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201210341192.3A Division CN102867055B (en) 2012-09-16 2012-09-16 A kind of image file format and generation method and device and application

Publications (2)

Publication Number Publication Date
CN109522280A CN109522280A (en) 2019-03-26
CN109522280B true CN109522280B (en) 2022-04-19

Family

ID=47445924

Family Applications (5)

Application Number Title Priority Date Filing Date
CN201811352867.8A Active CN109542849B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application
CN201811353448.6A Active CN109284264B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application
CN201210341192.3A Active CN102867055B (en) 2012-09-16 2012-09-16 A kind of image file format and generation method and device and application
CN201811353447.1A Active CN109522280B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application
CN201811352866.3A Active CN109471842B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application

Family Applications Before (3)

Application Number Title Priority Date Filing Date
CN201811352867.8A Active CN109542849B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application
CN201811353448.6A Active CN109284264B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application
CN201210341192.3A Active CN102867055B (en) 2012-09-16 2012-09-16 A kind of image file format and generation method and device and application

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811352866.3A Active CN109471842B (en) 2012-09-16 2012-09-16 Image file format, image file generating method, image file generating device and application

Country Status (1)

Country Link
CN (5) CN109542849B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104637082B (en) * 2013-11-14 2018-08-10 联想(北京)有限公司 A kind of method and device of information processing
CN103885465A (en) * 2014-04-02 2014-06-25 中国电影器材有限责任公司 Method for generating dynamic data of dynamic seat based on video processing
CN103942331B (en) * 2014-04-30 2017-06-09 中南大学 A kind of automatic mode of Land_use change vector database incremental update treatment
CN104284240B (en) * 2014-09-17 2018-02-02 小米科技有限责任公司 Video browsing approach and device
US9799376B2 (en) 2014-09-17 2017-10-24 Xiaomi Inc. Method and device for video browsing based on keyframe
CN104394451B (en) * 2014-12-05 2018-09-07 宁波菊风系统软件有限公司 A kind of video presentation method of intelligent mobile terminal
CN105163056A (en) * 2015-07-08 2015-12-16 成都西可科技有限公司 Video recording method capable of synchronously merging information of barometer and positioning information into video in real time
CN105208344B (en) * 2015-09-28 2018-02-06 中国水稻研究所 Distributed shifting agriculture disease and insect information collection and diagnostic system and embedded type camera
FR3048843A1 (en) * 2016-03-09 2017-09-15 Parrot Drones METHOD FOR ENCODING AND DECODING A VIDEO AND ASSOCIATED DEVICES
CN106231198B (en) * 2016-08-17 2019-03-22 北京小米移动软件有限公司 Shoot the method and device of image
CN108076279B (en) * 2016-11-11 2020-04-24 成都康烨科技有限公司 Camera sensing data writing method and device and camera
CN108663677A (en) * 2018-03-29 2018-10-16 上海智瞳通科技有限公司 A kind of method that multisensor depth integration improves target detection capabilities
US11320667B2 (en) * 2019-09-27 2022-05-03 Snap Inc. Automated video capture and composition system
CN111583348B (en) * 2020-05-09 2024-03-29 维沃移动通信有限公司 Image data encoding method and device, image data displaying method and device and electronic equipment
CN115060323B (en) * 2022-06-28 2023-09-08 南京师大环境科技研究院有限公司 Smart city environment influence assessment device and method

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4672094B2 (en) * 1999-01-22 2011-04-20 ソニー株式会社 Image processing apparatus and method, and recording medium
US8561095B2 (en) * 2001-11-13 2013-10-15 Koninklijke Philips N.V. Affective television monitoring and control in response to physiological data
JP4460447B2 (en) * 2002-06-28 2010-05-12 ノキア コーポレイション Information terminal
US20050122067A1 (en) * 2003-09-03 2005-06-09 Monster, Llc Action seating
US7480382B2 (en) * 2003-09-30 2009-01-20 Microsoft Corporation Image file container
DE102004007049A1 (en) * 2004-02-13 2005-09-01 Robert Bosch Gmbh Method for classifying an object with a stereo camera
CN101150647A (en) * 2006-09-19 2008-03-26 夏普株式会社 Image processing device, image forming device and image processing system
KR100843094B1 (en) * 2006-10-30 2008-07-02 삼성전자주식회사 Apparatus and method for managing image file
KR101364534B1 (en) * 2006-11-16 2014-02-18 삼성전자주식회사 System for inputting position information in image and method thereof
JP4930302B2 (en) * 2007-09-14 2012-05-16 ソニー株式会社 Imaging apparatus, control method thereof, and program
CN100561120C (en) * 2008-02-01 2009-11-18 黑龙江科技学院 A kind of formation method of three-dimension measuring system
CN201168449Y (en) * 2008-04-02 2008-12-24 宁波新文三维股份有限公司 Interactive movie theatre
CN101271526B (en) * 2008-04-22 2010-05-12 深圳先进技术研究院 Method for object automatic recognition and three-dimensional reconstruction in image processing
KR101763132B1 (en) * 2008-08-19 2017-07-31 디지맥 코포레이션 Methods and systems for content processing
CN101370088A (en) * 2008-10-14 2009-02-18 西安宏源视讯设备有限责任公司 Scene matching apparatus and method for virtual studio
KR101185589B1 (en) * 2008-11-14 2012-09-24 (주)마이크로인피니티 Method and Device for inputing user's commands based on motion sensing
CN201370970Y (en) * 2009-02-04 2009-12-30 智崴资讯科技股份有限公司 Dynamic emulator
CN101707734A (en) * 2009-05-25 2010-05-12 南京师范大学 Adaptive acquiring and sending device for mobile time-space positioning video-audio data
CN101800834A (en) * 2010-04-13 2010-08-11 美新半导体(无锡)有限公司 Device and method for compensating exchangeable image file information of digital image
CN101813453B (en) * 2010-04-14 2011-12-14 中国人民解放军军事交通学院 Dynamic inclination detecting device for automotive dynamic driving simulator and method thereof
CN101887412A (en) * 2010-06-22 2010-11-17 华为终端有限公司 File generation method and file generation device
CN101882032B (en) * 2010-07-02 2013-04-17 廖明忠 Handwriting input method, device and system and receiver
CN102256154A (en) * 2011-07-28 2011-11-23 中国科学院自动化研究所 Method and system for positioning and playing three-dimensional panoramic video
CN102445701A (en) * 2011-09-02 2012-05-09 无锡智感星际科技有限公司 Method for demarcating image position based on direction sensor and geomagnetism sensor
CN102580328B (en) * 2012-01-10 2014-02-19 上海恒润数码影像科技有限公司 Control device of 4D (four-dimensional) audio and video all-in-one machine and control method of 4D audio and video all-in-one machine

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510311A (en) * 2009-03-05 2009-08-19 浙江大学 Method for rapidly sorting a large amount of building side elevation images based on GPS information
CN101877753A (en) * 2009-05-01 2010-11-03 索尼公司 Image processing equipment, image processing method and program
US20110216160A1 (en) * 2009-09-08 2011-09-08 Jean-Philippe Martin System and method for creating pseudo holographic displays on viewer position aware devices
CN102647538A (en) * 2011-02-20 2012-08-22 联发科技股份有限公司 Image processing method and image processing apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vehicle-mounted urban information acquisition and 3D modeling system; Lu Xiushan et al.; Journal of Wuhan University; 2003-06-30; Vol. 36, No. 3; pp. I138-144 *
High-quality urban street-view image acquisition system and application; Wu Zhining; China Master's Theses Full-text Database, Information Science and Technology; 2011-02-15; pp. 76-80 *

Also Published As

Publication number Publication date
CN109522280A (en) 2019-03-26
CN109284264B (en) 2021-10-29
CN109542849B (en) 2021-09-24
CN102867055B (en) 2019-01-25
CN109471842A (en) 2019-03-15
CN109471842B (en) 2021-11-23
CN102867055A (en) 2013-01-09
CN109542849A (en) 2019-03-29
CN109284264A (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN109522280B (en) Image file format, image file generating method, image file generating device and application
CN109710057B (en) Method and system for dynamically reproducing virtual reality
US10380762B2 (en) Real-time remote collaboration and virtual presence using simultaneous localization and mapping to construct a 3D model and update a scene based on sparse data
US10389937B2 (en) Information processing device, information processing method, and program
CN109145788B (en) Video-based attitude data capturing method and system
CN109565571B (en) Method and device for marking attention area
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN105915849A (en) Virtual reality sports event play method and system
WO2016187235A1 (en) Virtual lens simulation for video and photo cropping
CN106357966A (en) Panoramic image photographing device and panoramic image acquiring method
CN104836938A (en) Virtual studio system based on AR technology
CN101872243B (en) System and method for realizing 360-degree panoramic play following real space direction
KR20210031894A (en) Information processing device, information processing method and program
CN106162204A (en) Panoramic video generation, player method, Apparatus and system
CN105653020A (en) Time traveling method and apparatus and glasses or helmet using same
CN109328462A (en) A kind of method and device for stream video content
CN108769648A (en) A kind of 3D scene rendering methods based on 720 degree of panorama VR
CN112532963B (en) AR-based three-dimensional holographic real-time interaction system and method
WO2013041152A1 (en) Methods to command a haptic renderer from real motion data
CN105893452B (en) Method and device for presenting multimedia information
CN111192350A (en) Motion capture system and method based on 5G communication VR helmet
CN108475410B (en) Three-dimensional watermark adding method, device and terminal
CN115442519A (en) Video processing method, device and computer readable storage medium
CN111629194B (en) Method and system for converting panoramic video into 6DOF video based on neural network
CN106664362B (en) Image processing apparatus, image processing method and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220331

Address after: 150080 floors 1-4, No. 16, Meishun street, Nangang concentration area, Harbin Economic Development Zone, Harbin, Heilongjiang Province

Applicant after: Harbin Huaqiang Electric Power Automation Engineering Co.,Ltd.

Address before: 226019 1-107, Science Park, No. 58, Chongchuan Road, Nantong City, Jiangsu Province

Applicant before: Wu Donghui

GR01 Patent grant