20 similar records found (search time: 31 ms)
1.
Abstract: Understanding the characteristics of tourist movement is essential for tourist behavior studies, since these characteristics underpin how tourism management selects strategies ranging from attraction planning to commercial product development. However, conventional tourism research methods are neither scalable nor cost-efficient enough to discover underlying movement patterns in massive datasets. With advances in information and communication technology, social media platforms provide big data sets generated by millions of people from different countries, all of which can be harvested cost-efficiently. This paper introduces a graph-based method to detect tourist movement patterns from Twitter data. First, collected tweets with geo-tags are cleaned to filter out those not published by tourists. Second, a DBSCAN-based clustering method is adapted to construct tourist graphs consisting of tourist-attraction vertices and edges. Third, network analytical methods (e.g. betweenness centrality, the Markov clustering algorithm) are applied to detect tourist movement patterns, including popular attractions, centric attractions, and popular tour routes. New York City in the United States is selected to demonstrate the utility of the proposed methodology. The detected tourist movement patterns assist business and government activities concerned with tour product planning, transportation, and the development of shopping and accommodation centers.
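The clustering step can be illustrated with a minimal pure-Python DBSCAN; the parameters, toy coordinates, and function names below are illustrative only, and the paper's adapted variant is not reproduced:

```python
import math

def dbscan(points, eps, min_pts):
    """Label points with cluster ids (0, 1, ...) or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # noise (may be claimed as border later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: labeled, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:
                queue.extend(more)   # core point: expand the cluster
    return labels

# two dense groups of geotagged fixes plus one isolated outlier
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1),
       (20.0, 20.0)]
labels = dbscan(pts, eps=0.5, min_pts=3)
```

Each resulting cluster would correspond to one candidate attraction vertex of the tourist graph.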
2.
E. Grafarend 《Journal of Geodesy》1970,44(1):41-49
Summary The probability of finding an error vector within multiples of the Helmert-Maxwell-Boltzmann point error σ²δij (δij the Kronecker symbol) is calculated. The probability is found to be 39% for σ, 86% for 2σ, and 99% for 3σ in two dimensions, and 20% for σ, 74% for 2σ, and 97% for 3σ in three dimensions. The fundamental Maxwell-Boltzmann distribution is tabulated from 0.02 to 4.50 in steps of 0.02.
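These percentages follow from the standard Rayleigh (two-dimensional) and Maxwell-Boltzmann (three-dimensional) cumulative distributions; a quick numerical check using the textbook formulas:

```python
import math

def p_inside(k, dim):
    """P(|error| <= k*sigma) for an isotropic normal error vector."""
    if dim == 2:   # Rayleigh CDF
        return 1.0 - math.exp(-k * k / 2.0)
    if dim == 3:   # Maxwell-Boltzmann CDF
        return (math.erf(k / math.sqrt(2.0))
                - math.sqrt(2.0 / math.pi) * k * math.exp(-k * k / 2.0))
    raise ValueError(dim)

# rounded percentages for 1, 2, 3 sigma in 2 and 3 dimensions
table = {(k, d): round(100 * p_inside(k, d)) for k in (1, 2, 3) for d in (2, 3)}
```

The rounded values reproduce the figures quoted in the abstract (39/86/99% and 20/74/97%).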
3.
E. Mittermayer 《Journal of Geodesy》1972,46(2):139-157
Summary The system of normal equations for the adjustment of a free network is singular. Therefore, a number of coordinates has to be fixed; the mean square errors and the error ellipses of such an adjustment depend on this choice.
This paper gives a simple, direct method for the adjustment of free networks in which no coordinates need to be fixed. This is done by minimizing not only the sum of the squares of the weighted errors, VᵀPV = minimum, but also the Euclidean norm of the solution vector X and the trace of the covariance matrix Q: XᵀX = minimum and trace(Q) = minimum. This last condition is crucial for geodetic problems of this type.
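The idea of replacing fixed coordinates by a minimum-norm condition can be illustrated on a tiny free levelling network; the bordered-system datum trick and all numbers below are an illustrative sketch, not the paper's method in detail:

```python
# Minimum-norm least-squares adjustment of a free levelling network.
# Hypothetical 3-point example: only height differences are observed, so the
# normal matrix is singular (rank defect 1). Instead of fixing one height,
# we pick the solution with minimal Euclidean norm x'x, obtained here by
# appending the datum condition sum(x) = 0 as a bordered system.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def free_network_heights(obs, n=3):
    """obs: list of (i, j, dh) meaning h_j - h_i = dh, unit weights."""
    N = [[0.0] * n for _ in range(n)]
    t = [0.0] * n
    for i, j, dh in obs:                       # normal equations N x = t
        for a, sa in ((i, -1.0), (j, 1.0)):
            t[a] += sa * dh
            for b2, sb in ((i, -1.0), (j, 1.0)):
                N[a][b2] += sa * sb
    # regularize with the minimum-norm datum condition sum(x) = 0:
    # bordered system [[N, 1], [1', 0]] [x; k] = [t; 0]
    A = [row + [1.0] for row in N] + [[1.0] * n + [0.0]]
    return solve(A, t + [0.0])[:n]

heights = free_network_heights([(0, 1, 1.0), (1, 2, 1.0)])
```

For the consistent observations above, the minimum-norm heights come out centred on zero, with no point held fixed.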
4.
On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model Y − E_Y = (X − E_X)·Ξ that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach are the standard multivariate least-squares approach, where only the observation matrix Y is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix
X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem
by using the nonlinear Euler–Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative
algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition.
For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert
cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing
the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment
of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift
to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335–342). The differences between the standard least-squares
and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion
matrix of the estimated parameters.
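For the simplest instance of the model (a single column in X and Ξ), the SVD-based "closed form" solution reduces to a scalar orthogonal-regression formula; a sketch under that simplifying assumption, with made-up data:

```python
import math

def tls_slope(xs, ys):
    """Total least-squares slope of y ≈ ξ·x with random errors in both x and y.

    Closed form for the scalar case: the direction of the smallest singular
    value of the data matrix [x y], i.e. orthogonal regression through the
    origin (a degenerate one-parameter instance of the MTLS model).
    """
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (syy - sxx + math.hypot(syy - sxx, 2.0 * sxy)) / (2.0 * sxy)

# noisy observations of a relation with true slope 2 (illustrative data)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.0]
xi = tls_slope(xs, ys)
```

Unlike ordinary least squares, this estimate treats the two coordinate sets symmetrically, which is exactly the "symmetry" issue the abstract highlights.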
5.
Summary Within the potential theory of the Poisson-Laplace equation, the boundary value problem of physical geodesy is classified as free and nonlinear. For solving this typical nonlinear boundary value problem, four different types of nonlinear integral equations corresponding to singular density distributions within single and double layers are presented. The characteristic problem of free boundaries, the problem of free surface integrals, is exactly solved by metric continuation. Even in the linear approximation of the fundamental relations of physical geodesy, the basic integral equations become nonlinear because of the special features of free surface integrals.
6.
Y. Kozai 《Journal of Geodesy》1968,42(3):355-357
From periodic variations of the orbital inclinations of three artificial satellites, 1959 Alpha 1, 1960 Iota 2, and 1962 Beta Mu 1, Love's number of the Earth and the time lag of the bodily tide due to friction are determined as 0.29±0.03 and (10±5) minutes in time, respectively.
While the previous paper on the determination of Love's number of the Earth (Kozai, 1967) was in press, a minor error was discovered in the Differential Orbit Improvement program (DOI) of the Smithsonian Astrophysical Observatory (SAO). Since the analysis was based on time variations of the orbital inclinations derived by the DOI from precisely reduced Baker-Nunn observations, it is likely that the results in the previous paper were affected by the error in the DOI. Therefore, the analysis is repeated using the revised DOI. Three satellites, 1959 Alpha 1 (Vanguard 2), 1960 Iota 2 (rocket of Echo 1), and 1962 Beta Mu 1 (Anna) (see Table 1), are adopted for determining Love's number in the present paper. The satellite 1959 Eta, which was used in the previous paper, is not adopted here, since its inclination shows unexplained irregular variations. Instead of 1959 Eta, 1962 Beta Mu 1 is adopted, as orbital elements from precisely reduced Baker-Nunn observations have become available for a long interval of time for this satellite.
7.
R. H. Rapp 《Journal of Geodesy》1969,43(1):47-80
A set of 2261 5°×5° mean anomalies was used alone and with satellite-determined harmonic coefficients of the Smithsonian Institution to determine the geopotential expansion to various degrees. The basic adjustment was carried out by comparing a terrestrial anomaly to an anomaly determined from an assumed set of coefficients. The (14, 14) solution was found to agree within ±3 m with a detailed geoid in the United States computed using 1°×1° anomalies for an inner area and satellite-determined anomalies in an outer area. Additional comparisons were made to the input anomaly field to assess the accuracy of various harmonic coefficient solutions.
A by-product of this investigation was a new γE = 978.0463 gals in the Potsdam system, or 978.0326 gals in an absolute system if −13.7 mgals is taken as the Potsdam correction. Combining this value of γE with f = 1/298.25 and KM = 3.9860122·10²² cm³/sec², the consistent equatorial radius was found to be 6378143 m.
8.
J. C. Owens 《Journal of Geodesy》1968,42(3):277-291
The development of lasers, new electro-optic light modulation methods, and improved electronic techniques have made possible
significant improvements in the range and accuracy of optical distance measurements, thus providing not only improved geodetic
tools but also useful techniques for the study of other geophysical, meteorological, and astronomical problems. One of the
main limitations, at present, to the accuracy of geodetic measurements is the uncertainty in the average propagation velocity
of the radiation due to inhomogeneity of the atmosphere. Accuracies of a few parts in ten million or even better now appear
feasible, however, through the use of the dispersion method, in which simultaneous measurements of optical path length at
two widely separated wavelengths are used to determine the average refractive index over the path and hence the true geodetic
distance. The design of a new instrument based on this method, which utilizes wavelengths of 6328 Å and 3681 Å and 3 GHz polarization modulation of the light, is summarized. Preliminary measurements over a 5.3 km path with this instrument have demonstrated a sensitivity of 3×10⁻⁹ in detecting changes in optical path length for either wavelength using 1-second averaging, and a standard deviation of 3×10⁻⁷ in corrected length. The principal remaining sources of error are summarized, as is progress in other laboratories using the dispersion method or other approaches to the problem of refractivity correction.
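The two-wavelength idea can be sketched numerically: if the refractivities at both wavelengths scale with one unknown mean density factor along the path, the true distance follows from the two measured optical lengths alone. The refractivity constants and path values below are illustrative, not the instrument's:

```python
# Measured optical path lengths: L_i = D * (1 + rho * N_i), where D is the
# true distance, rho the unknown mean air-density factor along the path, and
# N_i the known specific refractivity at wavelength i (made-up values with
# N1 > N2, mimicking dispersion between a blue and a red wavelength).
N1, N2 = 3.00e-4, 2.90e-4

def true_distance(L1, L2):
    # Eliminate rho:  L1 - L2 = D * rho * (N1 - N2)  and  L1 = D * (1 + rho * N1)
    # =>  D = L1 - (N1 / (N1 - N2)) * (L1 - L2)
    return L1 - N1 / (N1 - N2) * (L1 - L2)

# simulate a 5.3 km path under some atmospheric density factor
D, rho = 5300.0, 0.87
L1 = D * (1.0 + rho * N1)
L2 = D * (1.0 + rho * N2)
```

Note the amplification factor N1/(N1 − N2): the two optical lengths must be measured far more precisely than the target accuracy in D, which is why the quoted 3×10⁻⁹ sensitivity matters.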
9.
Smart tour-guide services have evolved from traditional single-scenic-area guidance to city-wide, multi-scenic-area guidance, and therefore comprise two major parts: guidance between scenic areas and guidance within a scenic area. They are increasingly oriented toward independent travelers and run as applications (apps) compatible with multiple mobile smart terminals. Because inter-area and intra-area guidance are usually self-contained systems, a switch between the two systems occurs as tourists enter and leave a scenic area; this switch is mainly performed manually, and seamless hand-over across a city's multiple scenic areas has not yet been achieved. To address this challenge, this paper designs a seamless guide-service model for multiple urban scenic areas and analyzes its architecture and working mechanism. It focuses on the key technologies of seamless urban guide service, such as the conditions for seamless switching between systems, including detecting that a tourist has entered or left a scenic area, and the mechanism for seamlessly linking the guiding process, including the design of dynamic guide data structures and the transfer of tourists' touring demands and guide-process state among multiple guide systems. Finally, a case study verifies the effectiveness of the method; as a new guide-service model, it also provides a solution for dynamically integrating existing guide apps.
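The entry/leave detection described above can be sketched with a standard ray-casting geofence test; the polygon, coordinates, and function names here are illustrative, not from the paper:

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test (polygon: list of (x, y) vertices)."""
    x, y = point
    n = len(polygon)
    odd = False
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge crosses the ray's level
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                odd = not odd                          # crossing to the right of point
    return odd

def transition(was_inside, point, fence):
    """Return 'enter', 'leave' or None for one new position fix."""
    now = inside(point, fence)
    if now and not was_inside:
        return 'enter'
    if was_inside and not now:
        return 'leave'
    return None

fence = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]  # toy scenic-area boundary
```

In a guide app, an 'enter'/'leave' event like this would trigger the hand-over between the intra-area and inter-area guide systems.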
10.
O. Remmer 《Journal of Geodesy》1969,43(2):99-122
A method for filtering geodetic observations which leaves the final result normally distributed is presented. Furthermore, it is shown that if you sacrifice 100·α% of all the observations, you may be (1−β)·100% sure that a gross error of size Δ is rejected.
Another, perhaps intuitively more appealing, method is presented; the two methods are compared, and it is shown why Method 1 should be preferred to Method 2 for geodetic purposes.
Finally, the two methods are demonstrated in some numerical examples.
11.
12.
Survey Review 2013,45(83):224-230
Abstract: Mr. A. J. Morley has contributed a series of articles in the Review (E.S.R., iv, 23, 16; iv, 25, 136; and vi, 40, 76) on the adjustment of trigonometrical levels and the evaluation of the coefficient of terrestrial refraction, with a view to ascertaining how other Colonies and Dominions deal with these problems. This object is very commendable, as several problems concerning both the observational and theoretical sides arise in height determinations, regarding which there is not much guidance in the usual treatises on the subject.
13.
Mixed Integer-Real Valued Adjustment (IRA) Problems: GPS Initial Cycle Ambiguity Resolution by Means of the LLL Algorithm
Erik W. Grafarend 《GPS Solutions》2000,4(2):31-44
In order to achieve GPS solutions of first-order accuracy and integrity, carrier phase observations as well as pseudorange
observations have to be adjusted with respect to a linear/linearized model. Here the problem of mixed integer-real valued
parameter adjustment (IRA) is met. Indeed, integer cycle ambiguity unknowns have to be estimated and tested. At first we review
the three concepts to deal with IRA: (i) DDD or triple difference observations are produced by a properly chosen difference
operator and choice of basis, namely being free of integer-valued unknowns (ii) The real-valued unknown parameters are eliminated
by a Gauss elimination step while the remaining integer-valued unknown parameters (initial cycle ambiguities) are determined
by Quadratic Programming and (iii) a RA substitute model is firstly implemented (real-valued estimates of initial cycle ambiguities)
and secondly a minimum distance map is designed which operates on the real-valued approximation of integers with respect to
the integer data in a lattice. This is the place where integer Gram-Schmidt orthogonalization by means of the LLL algorithm (modified LLL algorithm) is applied, illustrated by four examples. In particular, we prove that in general it is impossible to transform an oblique base of a lattice to an orthogonal base by Gram-Schmidt orthogonalization when its matrix entries are required to be integer. The volume-preserving Gram-Schmidt orthogonalization operator constrained to integer entries produces "almost orthogonal" bases which, in turn, can be used to produce the integer-valued unknown parameters (initial cycle ambiguities) from the LLL algorithm (modified LLL algorithm). Systematic errors generated by "almost orthogonal" lattice bases are quantified by A. K. Lenstra et al. (1982) as well as M. Pohst (1987). The solution point of integer least squares generated by the LLL algorithm is z = (L′)⁻¹[L′x] ∈ ℤᵐ, where L is the lower triangular Gram-Schmidt matrix, [·] denotes rounding to nearest integers, and x is the real-valued approximation of z ∈ ℤᵐ, the m-dimensional lattice space Λ. Indeed, due to the "almost orthogonality" of the integer Gram-Schmidt procedure, the solution point is only suboptimal, only close to "least squares." © 2000 John Wiley & Sons, Inc.
14.
Abstract: The proof of the attraction of a uniformly thin vertical block in the last issue (No. 24, p. 87) does not appear as satisfactory as the textbook would suggest. It is proposed to attempt here a more direct explanation, independent of substitutional expedients.
15.
Survey Review 2013,45(54):311-314
Abstract: There has always been a marked difference of opinion on the relative merits of the methods of bearings and of angles as applied to triangulation, though it is probable that the majority of writers prefer the method of bearings for first-order work. The subject was mentioned in a recent issue of this Review (vii, 47, 19).
16.
A relativistic delay model for Earth-based very long baseline interferometry (VLBI) observation of sources at finite distances is derived. The model directly provides the VLBI delay in the scale of terrestrial time. The effect of the curved wave front is represented by a pseudo source vector K = (R₁ + R₂)/|R₁ + R₂|, and the variation of the baseline vector due to the difference of arrival time is taken into account up to second order by using Halley's method. The precision of the new VLBI delay model is 1 ps for all radio sources above 100 km altitude from the Earth's surface in Earth-based VLBI observation. Simple correction terms (parallax effect) are obtained, which can also adapt the consensus model (e.g. International Earth Rotation and Reference Systems Service conventions) to finite-distance radio sources at R > 10 pc with the same precision. The new model may enable direct estimation of the distance to a radio source from VLBI delay data.
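Halley's method, used above to iterate the arrival-time difference to second order, can be sketched generically; the scalar test function below merely stands in for the delay equation:

```python
def halley(f, df, d2f, x0, tol=1e-14, max_iter=50):
    """Halley's root-finding iteration: cubic convergence using f, f', f''."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2.0 * fx * dfx / (2.0 * dfx * dfx - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# example: cube root of 2 as the root of f(x) = x^3 - 2, starting from x0 = 1
root = halley(lambda x: x**3 - 2.0,
              lambda x: 3.0 * x * x,
              lambda x: 6.0 * x,
              1.0)
```

Compared with Newton's method, the extra second-derivative term buys one more order of convergence per iteration, which is why a second-order delay expansion pairs naturally with it.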
17.
A new global TEC model for estimating transionospheric radio wave propagation errors
Space-based navigation and radar systems operating at single frequencies of <10 GHz require ionospheric corrections of the
signal delay or range error. Because this ionospheric propagation error is proportional to the total electron content of the
ionosphere along the ray path, a user friendly TEC model covering global scale and all levels of solar activity should be
helpful in various applications. Since such a model is not available yet, we present an empirical model approach that allows
determining global TEC very easily. Although the number of model coefficients and parameters is rather small, the model describes
main ionospheric features with good quality. Presented is the empirical approach describing dependencies on local time, geographic/geomagnetic
location and solar irradiance and activity. The non-linear approach needs only 12 coefficients and a few empirically fixed
parameters for describing the broad spectrum of TEC variation at all levels of solar activity. The model approach is applied to high-quality global TEC data derived by the Center for Orbit Determination in Europe (CODE) at the University of Berne over more than half a solar cycle (1998–2007). The model fits these input data with a negative bias of 0.3 TECU and an RMS deviation of 7.5 TECU. Like other empirical models, the proposed Global Neustrelitz TEC Model NTCM-GL is climatological, i.e. the model describes the average behaviour under quiet geomagnetic conditions. During severe space weather events the actual TEC data may deviate from the model values considerably, by more than 100%. A preliminary comparison with independent data sets such as TOPEX/Poseidon altimeter data reveals similar results for NeQuick and NTCM-GL, with RMS deviations on the order of 5 and 11 TECU (1 TECU = 10¹⁶ electrons/m²) for low and high solar activity conditions, respectively. The more extended database of ionosphere information that accumulates in the coming years will help in further improving the set of coefficients of the model.
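The proportionality the abstract relies on, first-order range error = 40.3·TEC/f² (SI units), is standard and easy to evaluate:

```python
def iono_range_error(tec_tecu, freq_hz):
    """First-order ionospheric range error in metres.

    tec_tecu: slant total electron content in TEC units (1 TECU = 1e16 el/m^2)
    freq_hz:  signal frequency in Hz
    """
    return 40.3 * tec_tecu * 1.0e16 / freq_hz ** 2

# 1 TECU at the GPS L1 frequency (1575.42 MHz) gives roughly 0.16 m of range error
err_l1 = iono_range_error(1.0, 1575.42e6)
```

This 1/f² dependence is also why the quoted RMS model errors of a few TECU translate into sub-metre ranging errors at L-band.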
18.
Abstract: The following is a report of the discussion on the paper by Mr. A. R. Robbins on "Deviation of the Vertical", which was read at a meeting of the Land Surveying Division of the Royal Institution of Chartered Surveyors held on Tuesday, 12th December, 1950, and which was published in the January issue of this Review (xi, 79, 28–36).
19.
An investigation was made of the behaviour of the variable x_ij (where ρ_ij are the discrepancies between the direct and reverse measurements of the height of consecutive bench marks and R_ij are their distances apart) in a partial net of the Italian high-precision levelling with a total length of about 1400 km.
The methods of analysis employed were in general non-parametric individual and cumulative tests; in particular, randomness, normality, and asymmetry tests were carried out. The computers employed were IBM 7094/7040. From the results, evidence was obtained of the existence of an asymmetry with respect to zero of the x_ij, confirming the well-known results given first by Lallemand. A new result was obtained from the tests of randomness, which put in evidence trends of the mean values of the x_ij and explained some anomalous behaviours of the cumulative discrepancy curves. The extension of this investigation to a broader net, possibly covering other national nets, would be very useful to get a deeper insight into the behaviour of the errors in high-precision levelling. Ad hoc programs for electronic computers are available to accomplish this job quickly.
Presented at the 14th International Assembly of Geodesy (Lucerne, 1967).
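The randomness tests mentioned are non-parametric; a minimal Wald-Wolfowitz runs test on the signs of consecutive discrepancies gives the flavour (generic textbook statistic, not the authors' exact procedure):

```python
import math

def runs_test(signs):
    """Wald-Wolfowitz runs test on a +/-1 sequence; returns (runs, z-score)."""
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n1 = sum(1 for s in signs if s > 0)
    n2 = len(signs) - n1
    n = n1 + n2
    mu = 2.0 * n1 * n2 / n + 1.0                       # expected runs if random
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1.0))
    return runs, (runs - mu) / math.sqrt(var)

# perfectly alternating signs: the maximum possible number of runs,
# which a randomness test flags just as strongly as too few runs would
runs, z = runs_test([1, -1] * 10)
```

A large |z| signals non-randomness; trends in the mean of the discrepancies, as found in the study, would typically show up as too few runs (strongly negative z).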
20.
In satellite data analysis, one big advantage of analytical orbit integration, which cannot be overestimated, is missed in
the numerical integration approach: spectral analysis or the lumped coefficient concept may be used not only to design efficient
algorithms but overall for much better insight into the force-field determination problem. The lumped coefficient concept,
considered from a practical point of view, consists of the separation of the observation equation matrix A = B·T into the product of two matrices. The matrix T is a very sparse matrix separating into small block-diagonal matrices connecting the harmonic coefficients with the lumped coefficients. The lumped coefficients are nothing other than the amplitudes of trigonometric functions depending on three angular orbital variables; therefore, for a sufficiently long data set, the matrix N = BᵀB becomes diagonally dominant, and in the case of an unlimited data string length strictly diagonal. Using an analytical solution of high order, the non-linear observation equations for low–low SST range data can be transformed into a form that allows the application of the lumped-coefficient concept.
They are presented here for a second-order solution together with an outline of how to proceed with data analysis in the spectral
domain in such a case. The dynamic model presented here provides not only a practical algorithm for the parameter determination
but also a simple method for an investigation of some fundamental questions, such as the determination of the range of the
subset of geopotential coefficients which can be properly determined by means of SST techniques or the definition of an optimal
orbital configuration for particular SST missions. Numerical results have already been obtained and will be published elsewhere.
Received: 15 January 1999 / Accepted: 30 November 1999
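The diagonal-dominance claim can be checked with a toy lumped-coefficient design matrix: columns that are trigonometric functions of an orbital angle become mutually orthogonal over a full data span. The check below is purely illustrative (no SST dynamics involved):

```python
import math

# sample cos(m*u) for a few trigonometric orders m over one full revolution
n, orders = 360, (1, 2, 3)
B = [[math.cos(m * 2.0 * math.pi * k / n) for m in orders] for k in range(n)]

# normal matrix N = B'B: off-diagonal entries cancel over the closed interval,
# mirroring the strictly diagonal limit for an unlimited data string
N = [[sum(B[k][i] * B[k][j] for k in range(n)) for j in range(len(orders))]
     for i in range(len(orders))]

off_diag = max(abs(N[i][j]) for i in range(3) for j in range(3) if i != j)
diag_min = min(N[i][i] for i in range(3))
```

For a finite, incomplete data span the off-diagonal terms no longer cancel exactly, which is the practical "diagonally dominant" regime described in the abstract.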