Similar Articles
20 similar articles found (search time: 46 ms)
1.
N. Scafetta, Climate Dynamics (2014) 43(1–2):175–192
Herein I propose a multi-scale dynamical analysis to facilitate the physical interpretation of tide gauge records. The technique uses graphical diagrams. It is applied to six secular-long tide gauge records representative of the world oceans: Sydney, Pacific coast of Australia; Fremantle, Indian Ocean coast of Australia; New York City, Atlantic coast of USA; Honolulu, US state of Hawaii; San Diego, US state of California; and Venice, Mediterranean Sea, Italy. For comparison, an equivalent analysis is applied to the Pacific Decadal Oscillation (PDO) index and to the Atlantic Multidecadal Oscillation (AMO) index. Finally, a global reconstruction of sea level (Jevrejeva et al. in Geophys Res Lett 35:L08715, 2008) and a reconstruction of the North Atlantic Oscillation (NAO) index (Luterbacher et al. in Geophys Res Lett 26:2745–2748, 1999) are analyzed and compared: both sequences cover about three centuries from 1700 to 2000. The proposed methodology quickly highlights oscillations and teleconnections among the records at the decadal and multidecadal scales. At the secular time scales tide gauge records present relatively small (positive or negative) accelerations, as found in other studies (Houston and Dean in J Coast Res 27:409–417, 2011). On the contrary, from the decadal to the secular scales (up to 110-year intervals) the tide gauge accelerations oscillate significantly from positive to negative values mostly following the PDO, AMO and NAO oscillations. In particular, the influence of a large quasi 60–70 year natural oscillation is clearly demonstrated in these records. The multiscale dynamical evolutions of the rate and of the amplitude of the annual seasonal cycle of the chosen six tide gauge records are also studied.
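The windowed-acceleration diagnostic behind such multi-scale diagrams can be sketched as fitting a quadratic to a sea-level segment and reading the acceleration off the second-order coefficient. The synthetic record, window placement, and 60-year oscillation amplitude below are illustrative assumptions, not Scafetta's data.

```python
import numpy as np

def window_acceleration(years, level, t0, length):
    """Fit level(t) = a + b*t + c*t^2 over [t0, t0+length); acceleration = 2c."""
    mask = (years >= t0) & (years < t0 + length)
    t = years[mask] - years[mask].mean()       # center time to stabilize the fit
    c = np.polyfit(t, level[mask], 2)[0]       # leading quadratic coefficient
    return 2.0 * c                             # mm/yr^2 if level is in mm

# Synthetic record: steady rise plus a 60-year oscillation (illustrative only)
years = np.arange(1900, 2001, dtype=float)
level = 2.0 * (years - 1900) + 30.0 * np.sin(2 * np.pi * (years - 1900) / 60.0)

# Acceleration flips sign depending on which phase of the cycle the window samples
acc_early = window_acceleration(years, level, 1900, 30)
acc_late = window_acceleration(years, level, 1930, 30)
```

Sliding such windows of varying length along the record produces the oscillating positive/negative accelerations described in the abstract.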

2.
Measurements of carbon dioxide (CO2) concentration were made at a coastal land station, Goa, on the west coast of India from March to June 2003 as part of the ARMEX (ARabian sea Monsoon Experiment) campaign. The observations show a systematic reduction (~120 mg m−3) of CO2 concentration during the pre-monsoon months, March–May, during which no significant change in anthropogenic emissions takes place. CO2 shoots up from 520 to 635 mg m−3 in June with the onset of the South West monsoon. Back trajectories show that the source of air mass gradually shifts from the coastal land mass to the open southern Arabian Sea during the pre-monsoon period. The observed reduction in CO2 is explained in terms of earlier measurements in the Arabian Sea indicating maximum chlorophyll a (Sarupria and Bhargava in J Mar Sci 27:292–297, 1998) and minimum partial pressure of CO2 (Sarma in J Geophys Res 108:3225, 2003) in the sea waters off the west coast of India during the pre-monsoon period, cleaner marine air mass advection from the open sea, and negligible local vertical CO2 flux.

3.
We determine the parameters of the semi-empirical link between global temperature and global sea level in a wide variety of ways, using different equations, different data sets for temperature and sea level as well as different statistical techniques. We then compare projections of all these different model versions (over 30) for a moderate global warming scenario for the period 2000–2100. We find the projections are robust and are mostly within ±20% of that obtained with the method of Vermeer and Rahmstorf (Proc Natl Acad Sci USA 106:21527–21532, 2009), namely ~1 m for the given warming of 1.8°C. Lower projections are obtained only if the correction for reservoir storage is ignored and/or the sea level data set of Church and White (Surv Geophys, 2011) is used. However, the latter provides an estimate of the base temperature T0 that conflicts with the constraints from three other data sets, in particular with proxy data showing stable sea level over the period 1400–1800. Our new best-estimate model, accounting also for groundwater pumping, is very close to the model of Vermeer and Rahmstorf (Proc Natl Acad Sci USA 106:21527–21532, 2009).
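The semi-empirical link can be sketched with the dual model of Vermeer and Rahmstorf, dH/dt = a(T − T0) + b dT/dt, integrated over an assumed linear 1.8 °C warming path. The parameter values and the temperature path below are illustrative stand-ins loosely based on that paper, not the fitted values of any of the 30+ model versions tested here.

```python
import numpy as np

# Illustrative parameters (roughly Vermeer & Rahmstorf 2009): cm/yr/K, cm/K, K
a, b, T0 = 0.56, -4.9, -0.41

years = np.linspace(2000, 2100, 1001)
dt = years[1] - years[0]
T = 0.8 + 1.8 * (years - 2000) / 100.0   # assumed linear 1.8 K warming path

# Forward-Euler integration of dH/dt = a*(T - T0) + b*dT/dt
H = 0.0
for i in range(len(years) - 1):
    dTdt = (T[i + 1] - T[i]) / dt
    H += (a * (T[i] - T0) + b * dTdt) * dt

rise_m = H / 100.0   # total 2000-2100 rise in metres
```

With these assumed inputs the integral lands near the ~1 m figure quoted above; varying a, b, and T0 across plausible fits is exactly what spreads the projections by roughly ±20%.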

4.
Expert elicitation studies have become important barometers of scientific knowledge about future climate change (Morgan and Keith, Environ Sci Technol 29(10), 1995; Reilly et al., Science 293(5529):430–433, 2001; Morgan et al., Climate Change 75(1–2):195–214, 2006; Zickfeld et al., Climatic Change 82(3–4):235–265, 2007, Proc Natl Acad Sci 2010; Kriegler et al., Proc Natl Acad Sci 106(13):5041–5046, 2009). Elicitations incorporate experts’ understanding of known flaws in climate models, thus potentially providing a more comprehensive picture of uncertainty than model-driven methods. The goal of standard elicitation procedures is to determine experts’ subjective probabilities for the values of key climate variables. These methods assume that experts’ knowledge can be captured by subjective probabilities—however, foundational work in decision theory has demonstrated this need not be the case when their information is ambiguous (Ellsberg, Q J Econ 75(4):643–669, 1961). We show that existing elicitation studies may qualitatively understate the extent of experts’ uncertainty about climate change. We designed a choice experiment that allows us to empirically determine whether experts’ knowledge about climate sensitivity (the equilibrium surface warming that results from a doubling of atmospheric CO2 concentration) can be captured by subjective probabilities. Our results show that, even for this much studied and well understood quantity, a non-negligible proportion of climate scientists violate the choice axioms that must be satisfied for subjective probabilities to adequately describe their beliefs. Moreover, the cause of their violation of the axioms is the ambiguity in their knowledge. We expect these results to hold to a greater extent for less understood climate variables, calling into question the veracity of previous elicitations for these quantities. 
Our experimental design provides an instrument for detecting ambiguity, a valuable new source of information when linking climate science and climate policy, which can help policy makers select decision tools appropriate to our true state of knowledge.

5.
The unit root testing within a breaking trend framework for global and hemispheric temperatures of Gay-Garcia, Estrada and Sánchez (Clim Change 94:333–349, 2009) is extended in two directions: first, the extended HadCRUT3 temperature series from Brohan et al. (J Geophys Res 111:D12106, 2006) are used and, second, new breaking trend estimators and unit root tests are employed, along with direct modelling of breaking trend and unit root processes for the series. Some differences to the results of Gay-Garcia et al. are found: break dates are shifted to 1976 for global and northern hemisphere temperatures and to 1964 for the southern hemisphere. Although the results are somewhat ambiguous, global and northern hemisphere temperatures are probably best modelled by unit root processes with a break in drift, while southern hemisphere temperatures follow a breaking trend process with stationary fluctuations about this trend. Irrespective of the models selected, there is little evidence of trend warming before the breaks, i.e., until the third quarter of the 20th century; after the breaks, northern hemisphere and global trend temperatures warm faster than in the southern hemisphere, the range being between 0.01 and 0.02 °C per annum.
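The break-date estimation step can be sketched as a least-squares grid search over candidate break points of a continuous broken trend. The synthetic series, break year, noise level, and minimum segment length below are assumptions for illustration, not the HadCRUT3 analysis itself.

```python
import numpy as np

def fit_breaking_trend(t, y, min_seg=10):
    """Grid-search the break date of y = a + b*t + c*(t - tb)*1[t > tb] + e,
    returning the SSR-minimizing candidate break date tb."""
    best_ssr, best_tb = np.inf, None
    for k in range(min_seg, len(t) - min_seg):
        tb = t[k]
        X = np.column_stack([np.ones_like(t), t, np.where(t > tb, t - tb, 0.0)])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        ssr = np.sum((y - X @ beta) ** 2)
        if ssr < best_ssr:
            best_ssr, best_tb = ssr, tb
    return best_tb

# Synthetic series: flat until 1976, then warming at 0.015 K/yr (illustrative)
rng = np.random.default_rng(0)
t = np.arange(1900, 2001, dtype=float)
y = np.where(t > 1976, 0.015 * (t - 1976), 0.0) + rng.normal(0, 0.02, t.size)

tb_hat = fit_breaking_trend(t, y)
```

In practice the break date and the unit-root behaviour must be tested jointly, which is why the published estimators are considerably more involved than this sketch.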

6.
We present further steps in our analysis of the early anthropogenic hypothesis (Ruddiman, Clim Change 61:261–293, 2003) that increased levels of greenhouse gases in the current interglacial, compared to lower levels in previous interglacials, were initiated by early agricultural activities, and that these increases caused a warming of climate long before the industrial era (~1750). These steps include updating observations of greenhouse gas and climate trends from earlier interglacials, reviewing recent estimates of greenhouse gas emissions from early agriculture, and describing a simulation by a climate model with a dynamic ocean forced by the low levels of greenhouse gases typical of previous interglacials in order to gauge the magnitude of the climate change for an inferred (natural) low greenhouse gas level relative to a high present day level. We conduct two time slice (equilibrium) simulations using present day orbital forcing and two levels of greenhouse gas forcing: the estimated low (natural) levels of previous interglacials, and the high levels of the present (control). By comparing the former to the latter, we estimate how much colder the climate would be without the combined greenhouse gas forcing of the early agriculture era (inferred from differences between this interglacial and previous interglacials) and the industrial era (the period since ~1750). With the low greenhouse gas levels, the global average surface temperature is 2.7 K lower than present day—ranging from ~2 K lower in the tropics to 4–8 K lower in polar regions. These changes are large, and larger than those reported in a pre-industrial (~1750) simulation with this model, because the imposed low greenhouse gas levels (CH4 = 450 ppb, CO2 = 240 ppm) are lower than both pre-industrial (CH4 = 760 ppb, CO2 = 280 ppm) and modern control (CH4 = 1,714 ppb, CO2 = 355 ppm) values. 
The area of year-round snowcover is larger, as found in our previous simulations and some other modeling studies, indicating that a state of incipient glaciation would exist given the current configuration of earth’s orbit (reduced insolation in northern hemisphere summer) and the imposed low levels of greenhouse gases. We include comparisons of these snowcover maps with known locations of earlier glacial inception and with locations of twentieth century glaciers and ice caps. In two earlier studies, we used climate models consisting of atmosphere, land surface, and a shallow mixed-layer ocean (Ruddiman et al., Quat Sci Rev 25:1–10, 2005; Vavrus et al., Quat Sci Rev 27:1410–1425, 2008). Here, we replaced the mixed-layer ocean with a complete dynamic ocean. While the simulated climate of the atmosphere and the surface with this improved model configuration is similar to our earlier results (Vavrus et al., Quat Sci Rev 27:1410–1425, 2008), the added information from the full dynamical ocean is of particular interest. The global and vertically-averaged ocean temperature is 1.25 K lower, the area of sea ice is larger, and there is less upwelling in the Southern Ocean. From these results, we infer that natural ocean feedbacks could have amplified the greenhouse gas changes initiated by early agriculture and possibly account for an additional increment of CO2 increase beyond that attributed directly to early agriculture, as proposed by Ruddiman (Rev Geophys 45:RG4001, 2007). However, a full test of the early anthropogenic hypothesis will require additional observations and simulations with models that include ocean and land carbon cycles and other refinements elaborated herein.

7.
Heat flux density at the soil surface (G0) was evaluated hourly during daylight hours over a vegetated cover 0.08 m high with a leaf area index of 1.07 m2 m−2, using the models of Choudhury et al. (Agric For Meteorol 39:283–297, 1987) (G0rn), Santanello and Friedl (J Appl Meteorol 42:851–862, 2003) (G0s) and force-restore (G0fr), and the plate calorimetry methodology (G0pco); the gradient calorimetry methodology (G0R) served as a reference for determining G0. It was found that the peak of G0R was at 1 p.m., with values that ranged between 60 and 100 W m−2, and that the G0/Rn relation varied during the day, with values close to zero in the early hours of the morning and close to 0.25 in the last hours of daylight. The G0s model presented the best performance, followed by the G0rn and G0fr models. The plate calorimetry methodology showed a similar behavior to that of the gradient calorimetry reference methodology.
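As a sketch of how such parametrizations work, the Santanello and Friedl (2003) model expresses the G0/Rn ratio as a cosine of time relative to solar noon. The amplitude and period used below are their published bare-soil values, taken here purely as illustrative defaults; vegetated covers such as the one studied take a smaller amplitude.

```python
import math

def g0_santanello_friedl(rn, t_since_noon_s, A=0.31, B=74000.0):
    """Soil heat flux from net radiation via a diurnal cosine ratio,
    G0/Rn = A*cos(2*pi*(t + 10800)/B), with t in seconds from solar noon.
    A and B here are illustrative bare-soil values (W m-2 in, W m-2 out)."""
    return rn * A * math.cos(2.0 * math.pi * (t_since_noon_s + 10800.0) / B)

g_morning = g0_santanello_friedl(400.0, -3 * 3600)   # 3 h before solar noon
g_evening = g0_santanello_friedl(400.0, +5 * 3600)   # 5 h after solar noon
```

The phase shift places the peak G0/Rn ratio in the morning and lets G0 go negative late in the day, when the soil releases stored heat.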

8.
Chris Hope, Climatic Change (2013) 117(3):531–543
PAGE09 is an updated version of the PAGE2002 integrated assessment model (Hope 2011a). The default PAGE09 model gives a mean estimate of the social cost of CO2 (SCCO2) of $106 per tonne of CO2, compared to $81 from the PAGE2002 model used in the Stern review (Stern 2007). The increase is the net result of several improvements that have been incorporated into the PAGE09 model in response to the critical debate around the Stern review: the adoption of the A1B socio-economic scenario, rather than A2 whose population assumptions are now thought to be implausible; the use of ranges for the two components of the discount rate, rather than the single values used in the Stern review; a distribution for the climate sensitivity that is consistent with the latest estimates from IPCC 2007a; less adaptation than in PAGE2002, particularly in the economic sector, which was criticised for possibly being over-optimistic; and a more theoretically-justified basis of valuation that gives results appropriate to a representative agent from the focus region, the EU. The effect of each of these adjustments is quantified and explained.

9.
A new approach is proposed to predict concentration fluctuations in the framework of one-particle Lagrangian stochastic models. The approach is innovative in that it allows the computation of concentration fluctuations in dispersing plumes using a Lagrangian one-particle model with micromixing, without the need to simulate background particles. The extension of the model to the treatment of chemically reactive plumes is also accomplished and allows the computation of plume-related chemical reactions in a Lagrangian one-particle framework separately from the background chemical reactions, accounting for the effect of concentration fluctuations on chemical reactions in a general, albeit approximate, manner. These characteristics should make the proposed approach an ideal tool for plume-in-grid calculations in chemistry transport models. The results are compared to the wind-tunnel experiments of Fackrell and Robins (J Fluid Mech 117:1–26, 1982) for plume dispersion in a neutral boundary layer and to the measurements of Legg et al. (Boundary-Layer Meteorol 35:277–302, 1986) for line source dispersion in and above a model canopy. Preliminary reacting plume simulations are also shown comparing the model with the experimental results of Brown and Bilger (J Fluid Mech 312:373–407, 1996; Atmos Environ 32:611–628, 1998) to demonstrate the feasibility of computing chemical reactions in the proposed framework.
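The micromixing ingredient of such one-particle models can be illustrated with the generic IEM (interaction by exchange with the mean) closure, in which each particle's concentration relaxes toward the local mean, damping fluctuations while conserving the mean. The time step, mixing time-scale, and particle ensemble below are arbitrary illustrative choices; the model in the paper is considerably more elaborate.

```python
import numpy as np

def iem_step(c, c_mean, dt, tau_m):
    """One IEM micromixing step: dc/dt = -(c - <c>)/tau_m, relaxing each
    particle concentration toward the ensemble mean over time-scale tau_m."""
    return c + dt * (c_mean - c) / tau_m

rng = np.random.default_rng(2)
c = rng.normal(10.0, 4.0, size=5000)   # particle concentrations (arbitrary units)
var0 = c.var()
for _ in range(100):
    c = iem_step(c, c.mean(), dt=0.01, tau_m=0.5)
var1 = c.var()
```

The mean is preserved exactly while the variance decays on the mixing time-scale, which is the behaviour a concentration-fluctuation model needs from its micromixing term.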

10.
The air–sea CO2 flux was measured from a research vessel in the North Yellow Sea in October 2007 using an open-path eddy-covariance technique. In 11 out of 64 samples, the normalized spectra of scalars (CO2, water vapour, and temperature) showed similarities. However, in the remaining samples, the normalized CO2 spectra were observed to be greater than those of water vapour and temperature at low frequencies. In this paper, the noise due to cross-sensitivity was identified through a combination of intercomparisons among the normalized spectra of the three scalars and additional analyses. Upon examination, the cross-sensitivity noise appeared to be mainly present at frequencies <0.8 Hz. Our analysis also suggested that the high-frequency fluctuations of CO2 concentration (frequency >0.8 Hz) were probably less affected by the cross-sensitivity. To circumvent the cross-sensitivity issue, the cospectrum in the high-frequency range 0.8–1.5 Hz, instead of the whole range, was used to estimate the CO2 flux by taking the contribution of the high frequencies to the CO2 flux to be the same as their contribution to the water vapour flux. The estimated air–sea CO2 flux in the North Yellow Sea was −0.039 ± 0.048 mg m−2 s−1, a value comparable to the estimates using the inertial dissipation method and Edson’s method (Edson et al., J Geophys Res 116:C00F10, 2011).
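The band-scaling estimate described above can be sketched as follows: integrate the CO2 cospectrum only over the clean 0.8–1.5 Hz band, then scale up by the fraction of the water-vapour flux carried in the same band. The cospectral shapes and flux magnitudes are synthetic stand-ins chosen so the right answer is known, not data from the cruise.

```python
import numpy as np

def band_scaled_flux(f, co_c, co_q, f_lo=0.8, f_hi=1.5):
    """Estimate the CO2 flux from its f_lo-f_hi cospectral band alone, scaled
    by the fraction of the water-vapour flux carried in that band (the
    scalar-similarity assumption described above)."""
    df = f[1] - f[0]
    band = (f >= f_lo) & (f <= f_hi)
    fq_total = co_q.sum() * df
    fq_band = co_q[band].sum() * df
    fc_band = co_c[band].sum() * df
    return fc_band * fq_total / fq_band

# Synthetic cospectra sharing one shape; the CO2 cospectrum is contaminated
# below 0.8 Hz to mimic cross-sensitivity noise (all numbers illustrative).
f = np.linspace(0.01, 3.0, 600)
shape = np.exp(-f)
shape /= shape.sum() * (f[1] - f[0])          # normalize to unit integral
co_q = 0.05 * shape                           # water-vapour cospectrum (flux 0.05)
co_c = -0.04 * shape                          # "true" CO2 cospectrum (flux -0.04)
co_c_noisy = co_c + np.where(f < 0.8, 0.03 * shape, 0.0)

flux_est = band_scaled_flux(f, co_c_noisy, co_q)
```

Because the contamination is confined below 0.8 Hz, the band-scaled estimate recovers the uncontaminated flux despite the biased low-frequency cospectrum.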

11.
Developing economy greenhouse gas emissions are growing rapidly relative to developed economy emissions (Boden et al. 2010), and developing economies as a group have greater emissions than developed economies. These developments are expected to continue (U.S. Energy Information Administration 2010), which has led some to question the effectiveness of emissions mitigation in developed economies without a commitment to extensive mitigation action from developing economies. One often heard argument against proposed U.S. legislation to limit carbon emissions to mitigate climate change is that, without participation from large developing economies like China and India, stabilizing temperature at 2 degrees Celsius above preindustrial (United Nations 2009), or even reducing global emissions levels, would be impossible (Driessen 2009; RPC Energy Facts 2009) or prohibitively expensive (Clarke et al. 2009). Here we show that significantly delayed action by rapidly developing countries is not a reason to forgo mitigation efforts in developed economies. This letter examines the effect of a scenario with no explicit international climate policy and two policy scenarios, full global action and a developing economy delay, on the probability of exceeding various global average temperature changes by 2100. It demonstrates that even when developing economies delay any mitigation efforts until 2050, action by developed economies will appreciably reduce the probability of more extreme levels of temperature change. We conclude that early carbon mitigation efforts by developed economies will considerably affect the distribution of future climate change, whether or not developing countries begin mitigation efforts in the near term.

12.
The Arrhenius expressions and the data plotted in Figure 2 of Rodriguez et al. (2008) give rate coefficients of approximately 2 × 10−8 cm3 molecule−1 s−1 at 255 K. Such values are approximately two orders of magnitude larger than expected from simple collision theory (Finlayson-Pitts and Pitts 1986). The rate coefficients reported at sub-ambient temperatures are substantially greater than the gas kinetic limit and are not physically plausible. The rate coefficients reported by Rodriguez et al. imply a long-range attraction between the reactants which is not reasonable for reactions of neutral species such as chlorine atoms and unsaturated alcohols. We also note that the pre-exponential A factors (10−23–10−20) and activation energies (−15 kcal mol−1) are not physically plausible. We conclude that there are large systematic errors in the study by Rodriguez et al. (Atmos Chem 59:187–197, 2008).
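A quick order-of-magnitude check of the argument: plugging a pre-exponential factor and a negative activation energy from the quoted ranges into the Arrhenius expression yields a 255 K rate coefficient well above a rough collision-limited value. The specific A, Ea, and gas-kinetic limit used here are illustrative round numbers, not the exact fitted parameters of the criticized study.

```python
import math

R = 8.314  # gas constant, J mol-1 K-1

def arrhenius(A, Ea_kcal, T):
    """k = A * exp(-Ea / RT); A in cm3 molecule-1 s-1, Ea in kcal mol-1."""
    return A * math.exp(-Ea_kcal * 4184.0 / (R * T))

# Illustrative values from the ranges quoted above: A = 1e-21, Ea = -15 kcal/mol
k_255 = arrhenius(1e-21, -15.0, 255.0)

# Rough collision-limited (gas kinetic) rate for neutral-neutral reactions
k_gas_kinetic = 3e-10  # cm3 molecule-1 s-1, assumed ballpark figure
```

The negative activation energy makes k grow as T falls, so at 255 K the computed coefficient lands in the 10−9–10−8 cm3 molecule−1 s−1 range and exceeds the collision limit by well over an order of magnitude, which is the physical implausibility the comment identifies.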

13.
A high-quality inventory is an important step toward greenhouse gas emission mitigation. Inventory quality is assessed by means of uncertainty analysis, and the level of uncertainty depends upon the reliability of the activity data and parameters used. An attempt has been made to improve the accuracy of the estimates through a shift from the production-based method (IPCC Tier 1) (IPCC 2000) to an enhanced combination of production-based and mass balance methods (IPCC Tier 2) (IPCC 2006) in the estimation of emissions from operations with oil, which are a key category in the national greenhouse gas inventory of the Russian Federation. The IPCC Tier 2 method (IPCC 2006) was adapted to national conditions. Greenhouse gas emissions were calculated for 1990 to 2009 with both methods. A quantitative uncertainty assessment of the calculations was performed and the outcomes were compared. The comparison showed that the estimates made with the higher tier method resulted in higher accuracy and lower uncertainty (26 % compared with the previously derived 54 %).
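The quadrature rule used in such uncertainty assessments (IPCC error-propagation for a sum of source categories) can be sketched as follows; the three category emissions and percentage uncertainties are hypothetical, not values from the Russian inventory.

```python
import math

def combined_uncertainty(emissions, uncertainties_pct):
    """IPCC error propagation for a sum of source categories: absolute
    half-widths add in quadrature, then are re-expressed relative to the
    total emissions."""
    total = sum(emissions)
    half_widths = [e * u / 100.0 for e, u in zip(emissions, uncertainties_pct)]
    return 100.0 * math.sqrt(sum(h * h for h in half_widths)) / total

# Hypothetical oil-sector categories (Mt CO2-eq) and their uncertainties (%)
u_total = combined_uncertainty([120.0, 45.0, 30.0], [30.0, 80.0, 50.0])
```

Because the terms combine in quadrature, tightening the most uncertain categories (as a Tier 2 method does with better activity data and parameters) pulls the combined figure down sharply, which is the mechanism behind the reported drop from 54 % to 26 %.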

14.
Kleidon (2009) concludes that warm climates impose important constraints on the evolution of large brains relative to body size, confirming our previous hypothesis (Schwartzman and Middendorf 2000). Here we update the case for our hypothesis and present a first-approximation estimate of the cooling required for hominin brain size increase using a simple model of heat loss. We conclude that Pleistocene glacial episodes were likely sufficient to serve as prime releasers for the emergence of Homo habilis and Homo erectus. In addition, we propose that atmospheric oxygen levels may have been an analogous constraint on insect encephalization.

15.
We evaluate the claim by Gay et al. (Clim Change 94:333–349, 2009) that “surface temperature can be better described as a trend stationary process with a one-time permanent shock” than efforts by Kaufmann et al. (Clim Change 77:249–278, 2006) to model surface temperature as a time series that contains a stochastic trend that is imparted by the time series for radiative forcing. We test this claim by comparing the in-sample forecast generated by the trend stationary model with a one-time permanent shock to the in-sample forecast generated by a cointegration/error correction model that is assumed to be stable over the 1870–2000 sample period. Results indicate that the in-sample forecast generated by the cointegration/error correction model is more accurate than the in-sample forecast generated by the trend stationary model with a one-time permanent shock. Furthermore, Monte Carlo simulations of the cointegration/error correction model generate time series for temperature that are consistent with the trend-stationary-with-a-break result generated by Gay et al. (Clim Change 94:333–349, 2009), while the time series for radiative forcing cannot be modeled as trend stationary with a one-time shock. Based on these results, we argue that modeling surface temperature as a time series that shares a stochastic trend with radiative forcing offers the possibility of greater insights regarding the potential causes of climate change and efforts to slow its progression.

16.
Evaluation of Two Energy Balance Closure Parametrizations (total citations: 1; self-citations: 0; external citations: 1)
A general lack of energy balance closure indicates that tower-based eddy-covariance (EC) measurements underestimate turbulent heat fluxes, which calls for robust correction schemes. Two parametrization approaches that can be found in the literature were tested using data from the Canadian Twin Otter research aircraft and from tower-based measurements of the German Terrestrial Environmental Observatories (TERENO) programme. Our analysis shows that the approach of Huang et al. (Boundary-Layer Meteorol 127:273–292, 2008), based on large-eddy simulation, is not applicable to typical near-surface flux measurements because it was developed for heights above the surface layer and over homogeneous terrain. The biggest shortcoming of this parametrization is that the grid resolution of the model was too coarse, so that the surface layer, where EC measurements are usually made, is not properly resolved. The empirical approach of Panin and Bernhofer (Izvestiya Atmos Oceanic Phys 44:701–716, 2008) considers landscape-level roughness heterogeneities that induce secondary circulations and at least gives a qualitative estimate of the energy balance closure. However, it does not consider any feature of landscape-scale heterogeneity other than surface roughness, such as surface temperature, surface moisture or topography. The failures of both approaches might indicate that the influence of mesoscale structures is not a sufficient explanation for the energy balance closure problem. However, our analysis of different wind-direction sectors shows that the upwind landscape-scale heterogeneity indeed influences the energy balance closure determined from tower flux data. We also analyzed the aircraft measurements with respect to the partitioning of the “missing energy” between sensible and latent heat fluxes, and we could confirm the assumption of scalar similarity only for Bowen ratios ≈ 1.
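One common way to use scalar similarity in closure adjustment is to distribute the energy-balance residual between the sensible and latent fluxes so that their measured ratio (the Bowen ratio) is preserved. This sketch is a generic illustration of that idea with made-up flux values, not the partitioning analysis performed with the aircraft data.

```python
def close_energy_balance(H, LE, Rn, G):
    """Distribute the residual (Rn - G - H - LE) between sensible (H) and
    latent (LE) heat fluxes while preserving the measured Bowen ratio H/LE.
    All fluxes in W m-2."""
    residual = Rn - G - H - LE
    bowen = H / LE
    H_adj = H + residual * bowen / (1.0 + bowen)
    LE_adj = LE + residual * 1.0 / (1.0 + bowen)
    return H_adj, LE_adj

# Made-up half-hourly fluxes with a 100 W m-2 closure gap
H_adj, LE_adj = close_energy_balance(H=150.0, LE=200.0, Rn=500.0, G=50.0)
```

After adjustment the turbulent fluxes sum to the available energy Rn − G while H/LE is unchanged; the aircraft analysis above suggests this preservation assumption is only safe near Bowen ratios of one.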

17.
Fifty-four broadband models for the computation of solar diffuse irradiation on a horizontal surface were tested in Romania (South-Eastern Europe). The input data consist of surface meteorological data, column integrated data, and data derived from satellite measurements. The testing procedure is performed in 21 stages intended to provide information about the sensitivity of the models to the various sets of input data. No single model can be ranked “the best” for all sets of input data. However, some models performed better than others, in the sense that they were ranked among the best for most of the testing stages. The best models for solar diffuse radiation computation are, on an equal footing, the ASHRAE 2005 model (ASHRAE 2005) and the King model (King and Buckius, Solar Energy 22:297–301, 1979). The second best is the MAC model (Davies, Bound Layer Meteor 9:33–52, 1975). Details about the performance of each model in the 21 testing stages are found in the Electronic Supplementary Material.

18.
Eddy-correlation measurements of the oceanic CO2 flux are useful for the development and validation of air–sea gas exchange models and for analysis of the marine carbon cycle. Results from more than a decade of published work and from two recent field programs illustrate the principal interferences from water vapour and motion, demonstrating experimental approaches for improving measurement precision and accuracy. Water vapour cross-sensitivity is the greatest source of error for CO2 flux measurements using infrared gas analyzers (IRGAs), often leading to a ten-fold bias in the measured CO2 flux. Much of this error is not related to optical contamination, as previously supposed. While various correction schemes have been demonstrated, the use of an air dryer and a closed-path analyzer is the most effective way to eliminate this interference. This approach also obviates the density corrections described by Webb et al. (Q J R Meteorol Soc 106:85–100, 1980). Signal lag and frequency response are a concern with closed-path systems, but periodic gas pulses at the inlet tip provide for precise determination of lag time and frequency attenuation. Flux attenuation corrections are shown to be <5 % for a cavity ring-down spectroscopy (CRDS) analyzer and dryer with a 60-m inlet line. The estimated flux detection limit for the CRDS analyzer and dryer is a factor of ten better than for IRGAs sampling moist air. While ship-motion interference is apparent with all analyzers tested in this study, decorrelation or regression methods are effective in removing most of this bias from IRGA measurements and may also be applicable to the CRDS.
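A closed-path lag time can also be diagnosed statistically, by finding the delay that maximizes the magnitude of the vertical-velocity/scalar cross-covariance. This is a generic alternative sketch, not the pulse-based determination described in the text, and the synthetic 10 Hz signals and 37-sample delay below are fabricated for illustration.

```python
import numpy as np

def lag_samples(w, c, max_lag=200):
    """Return the delay (in samples) of scalar c behind vertical velocity w,
    found as the shift maximizing |cross-covariance(w', c')|."""
    w = w - w.mean()
    c = c - c.mean()
    cov = [np.mean(w[:len(w) - k] * c[k:]) for k in range(max_lag + 1)]
    return int(np.argmax(np.abs(cov)))

# Synthetic 10 Hz series: scalar correlated with w but delayed by 37 samples
rng = np.random.default_rng(1)
w = rng.normal(size=20000)
c = np.roll(w, 37) + 0.5 * rng.normal(size=20000)

lag = lag_samples(w, c)
```

Shifting the scalar record back by the recovered lag before computing the covariance removes the flux loss caused by inlet-tube transit time; the pulse method in the text pins the same delay down without relying on the flux signal itself.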

19.
Gary Yohe, Climatic Change (2010) 99(1–2):295–302
Article 2 of the United Nations Framework Convention on Climate Change commits its parties to stabilizing greenhouse gas concentrations in the atmosphere at a level that “would prevent dangerous anthropogenic interference with the climate system.” Authors of the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC 2001a, b) offered some insight into what negotiators might consider dangerous by highlighting five “reasons for concern” (RFCs) and tracking concern against changes in global mean temperature; they illustrated their assessments in the now iconic “burning embers” diagram. The Fourth Assessment Report reaffirmed the value of plotting RFCs against temperature change (IPCC 2007a, b), and Smith et al. (2009) produced an updated embers visualization for the globe. This paper applies the same assessment and communication strategies to calibrate the comparable RFCs for the United States. It adds “National Security Concern” as a sixth RFC because many now see changes in the intensity and/or frequency of extreme events around the world as “risk enhancers” that deserve attention at the highest levels of the US policy and research communities. The US embers portrayed here suggest that: (1) US policy-makers will not discover anything really “dangerous” over the near to medium term if they consider only economic impacts that are aggregated across the entire country, but that (2) they could easily uncover “dangerous anthropogenic interference with the climate system” by focusing their attention on changes in the intensities, frequencies, and regional distributions of extreme weather events driven by climate change.

20.
The local thermal effects in the wake of a single cube with a strongly heated rear face, representing a large building in an urban area, are studied using large-eddy simulation (LES) for various degrees of heating, which are characterized by the local Richardson number, Ri. New wall models are implemented for momentum and temperature, and comparison of the flow and thermal fields with the wind-tunnel data of Richards et al. (J Wind Eng Ind Aerodyn 94:621–636, 2006) shows fair agreement. Buoyancy effects are quite evident at low Ri and a significant increase in the turbulence levels is observed for such flows. Apart from the comparisons with experiments, further analysis included the estimation of the thermal boundary-layer thickness and heat transfer coefficient for all Ri. For sufficiently strong heating, the heat transfer coefficient at the leeward face is found to be higher than that at the roof surface. This suggests that, beyond a certain Ri value, buoyancy forces at the former surface dominate the strong streamwise convection at the latter. Quadrant analysis along the shear layer behind the cube showed that the strength of sweeps that contribute to the momentum flux is considerably enhanced by heating. The contribution of different quadrants to the heat flux is found to be very different from that to the momentum flux at lower Ri.
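The degree of heating in such studies is typically characterized by a bulk Richardson number comparing buoyancy generated at the heated face with the inertia of the oncoming flow. The definition sketched here is a generic bulk form, and the cube height, temperature excess, and wind speed are illustrative assumptions; the paper's exact Ri definition and parameter values may differ.

```python
def bulk_richardson(theta_surface, theta_air, U, h, g=9.81):
    """Bulk Richardson number for a heated obstacle of height h in a flow of
    speed U: Ri = g*h*(theta_s - theta_a) / (theta_a * U^2). Positive Ri
    means buoyancy from the heated surface opposes/augments the shear-driven
    wake dynamics; larger Ri means stronger relative heating."""
    return g * h * (theta_surface - theta_air) / (theta_air * U * U)

# Illustrative wind-tunnel-scale values: 0.2 m cube, 60 K excess, 1.5 m/s flow
Ri = bulk_richardson(theta_surface=360.0, theta_air=300.0, U=1.5, h=0.2)
```

Sweeping U or the surface temperature excess in such a formula is how a simulation campaign spans the "various degrees of heating" compared in the abstract.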
