Meteorology is the scientific study of the atmosphere that focuses on weather processes and forecasting. Meteorological phenomena are observable weather events which illuminate, and are explained by, the science of meteorology. Those events are bound by the variables that exist in Earth's atmosphere: temperature, pressure, water vapor, the gradients and interactions of each, and the ways they change over time. The majority of Earth's observed weather occurs in the troposphere.
History of meteorology
Main article: timeline of meteorology
The term meteorology goes back to the book Meteorologica (dating from about 340 BC) by Aristotle, who combined observations with speculation as to the origin of celestial phenomena. The Greek word meteoron refers to things "high in the sky", that is between Earth and the realm of the stars, while logos means "study". A similar work, called "Book of Signs", was published by Theophrastus, a pupil of Aristotle. It was centered more on predicting the weather by interpreting established celestial phenomena, such as a halo around the moon, without bothering with explanations.
Further progress in the meteorological field had to wait until accurate instruments were available. Galileo constructed a thermometer in 1607, followed by Torricelli's invention of the barometer in 1643. The dependence of atmospheric pressure on height was first demonstrated by Blaise Pascal and René Descartes; the idea was also investigated by Edmond Halley. The anemometer for measuring wind speed was constructed in 1667 by Robert Hooke, while Horace-Bénédict de Saussure completed this list of the most important meteorological instruments in 1780 with the hair hygrometer, which measures humidity.
Other technological advances usually thought of mainly as part of the progression of physics were Robert Boyle's investigation of the dependence of gas volume on pressure, which led to thermodynamics, and Benjamin Franklin's kite experiments with lightning. Franklin was also the first American to keep accurate and detailed daily records of weather conditions, and one of the first Americans to forecast the weather on a daily basis.
The first overall correct explanation of global circulation was the 1735 study by George Hadley of the Trade Winds, which gave rise to calling the tropical cell of zonal mean atmospheric circulation the "Hadley cell". Understanding the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Late in the 19th century the full extent of the large scale interaction of pressure gradient force and deflecting force that in the end causes air masses to move along isobars was understood. Early in the 20th century this deflecting force was named after Gaspard-Gustave Coriolis, who had published in 1835 on the energy yield of machines with rotating parts, such as waterwheels.
Synoptic weather observations were still hindered by the difficulty of establishing certain weather characteristics such as clouds or wind. These were solved when Luke Howard and Francis Beaufort introduced their systems for classifying clouds (1802) and wind speeds (1806), respectively. The real turning point however was the invention of the telegraph in 1843 that allowed exchange of weather information with unprecedented speed.
Early in the 20th century, theoretical studies of atmospheric phenomena usually were performed analytically, that is by taking the fluid-dynamical equations that govern atmospheric flow, simplifying them by neglecting lesser terms, and looking for solutions to these equations. For example, Vilhelm Bjerknes developed the model that explains the generation, intensification and ultimate decay (the life cycle) of midlatitude cyclones, introducing the idea of fronts, that is, sharply defined boundaries between air masses.
Starting in the 1950s, numerical experiments with computers became feasible. The first weather forecasts derived this way used barotropic (that means, single-vertical-level) models, and could successfully predict the large-scale movement of midlatitude Rossby waves, that is, the pattern of atmospheric lows and highs.
In the 1960s, the chaotic nature of the atmosphere was first understood by Edward Lorenz, founding the field of chaos theory. The mathematical advances achieved here later filtered back to meteorology and made it possible to describe the limits of predictability inherent in atmospheric modelling. This is known as the butterfly effect, because the growth of disturbances over time means that even one as minute as the flapping of a butterfly's wings could much later cause a large disturbance to form somewhere else.
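Lorenz's finding can be illustrated with his own three-variable convection model. The sketch below is a minimal forward-Euler integration with the standard parameter values; the step size and run length are arbitrary illustrative choices, not Lorenz's original setup. It runs two trajectories whose initial conditions differ by one part in a hundred million and measures how far apart they end up:

```python
import math

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def evolve(start, steps=5000, dt=0.005):
    """Integrate the system forward from a given starting point."""
    state = start
    for _ in range(steps):
        state = lorenz_step(state, dt)
    return state

a = evolve((1.0, 1.0, 1.0))
b = evolve((1.0, 1.0, 1.0 + 1e-8))   # the "flap of a butterfly's wings"
separation = math.dist(a, b)          # grows by many orders of magnitude
```

After a few thousand steps the tiny initial offset has typically grown to the scale of the attractor itself, which is exactly the predictability limit described above.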
In 1960, the launch of TIROS-1, the first successful weather satellite, marked the beginning of the age in which weather information is available globally. Weather satellites, along with more general-purpose Earth-observing satellites circling the earth at various altitudes, have become an indispensable tool for studying a wide range of phenomena, from forest fires to El Niño.
In recent years, climate models have been developed that feature a resolution comparable to older weather prediction models. These climate models are used to investigate long-term climate shifts, such as what effects might be caused by human emission of greenhouse gases.
Meteorology and climatology: some challenges for this century
With the development of powerful new supercomputers like the Earth Simulator in Japan, mathematical modeling of the atmosphere can reach unprecedented accuracy. This is not only due to the enhanced spatial and temporal resolution of the grids employed, but also because these more powerful machines can model the Earth as an integrated climate system, in which atmosphere, ocean, vegetation, and man-made influences depend on each other realistically. The goal in global meteorological modeling can thus currently be termed Earth System Modeling, with a growing number of models of various processes coupled to each other. Predictions for global effects like global warming and El Niño are expected to benefit substantially from these advancements.
Regional models are also becoming more interesting as the resolution of global models increases and with the observed increase in regional weather disasters such as the Elbe flooding in 2002 and the European heat wave in 2003. Decision makers expect these models to provide accurate assessments of the possible increase of such natural hazards in specific regions, and of countermeasures (such as dikes, or areas that are intentionally flooded to reduce flooding elsewhere) that might be effective in preventing or at least attenuating them.
For models at all scales, increased model resolution means less reliance on parameterizations, which are empirically derived expressions for processes that cannot be resolved on the model grid. For example, in mesoscale models individual clouds can now be resolved, removing the need for formulations that average over a grid box. In global modeling, atmospheric waves such as gravity waves with short temporal and spatial scales can be represented without resorting to often overly simplified parameterizations.
Possibilities for future improvements
With model output approaching observational data (e.g. from satellite soundings) in resolution, the sheer size of the datasets means that data mining and data management will become equally important considerations in meteorological computing. In light of the decreasing density of surface and rawinsonde observations, new algorithms have to be developed to extract similarly accurate information from satellite data, for example about cloud type and distribution. Data management will become more global in nature, with central archives storing a large number of numerical experiments from various institutions. These data need a sufficient amount of metadata attached so that they can be conveniently retrieved through a WWW interface from anywhere. Such archives will ease the important task of comparing experiments conducted with different models, which is instrumental for their further improvement. Grid computing may also be an interesting way to harness the power of meteorological supercomputers more effectively. International cooperation is nothing unusual in modelling, but grid computing might automate the process of running a model wherever the right amount of computing resources is currently available, leaving scientists more time to analyze the results.
Meteorological instrumentation that is used at the surface or in airplanes also has room for improvement. Radar and lidar show precipitation and clouds by their effects on emitted monospectral electromagnetic waves. If radar measurements can be used to accurately determine the amount of precipitation (which as of now is only possible with rain gauges), this would be beneficial for numerical weather prediction. Lidar can be used to study clouds that are so thin that they cannot be seen by the naked eye such as certain types of cirrus filaments. Researchers continue to find new atmospheric details such as high-altitude clouds that can form from contrails, which suggest that air travel may affect regional weather.
Improvements in understanding forecast uncertainty are being made through ensemble forecasting: using slightly different starting conditions to make several forecasts. This was pioneered operationally in 1992 by both the European Centre for Medium-Range Weather Forecasts (ECMWF) and the US National Centers for Environmental Prediction (NCEP), which use different methods for generating the ensemble members. These techniques allow forecasters to judge when features in a forecast are nearly certain or rather unlikely, and also allow extreme events to be picked up further in advance. This method of forecasting is still in its infancy, and its use seems likely to become more advanced and widespread in the future.
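The principle can be sketched with a toy chaotic model standing in for a real forecast system. The logistic map, ensemble size, and perturbation amplitude below are purely illustrative assumptions and bear no relation to how ECMWF or NCEP actually generate their members:

```python
import random
import statistics

def toy_model(x0, steps):
    """Stand-in 'atmosphere': the chaotic logistic map (r = 3.9)."""
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x

def ensemble_forecast(analysis, n_members=50, perturbation=1e-4,
                      steps=20, seed=0):
    """Run the model from slightly perturbed analyses; return all members."""
    rng = random.Random(seed)
    return [toy_model(analysis + rng.uniform(-perturbation, perturbation),
                      steps)
            for _ in range(n_members)]

short_range = ensemble_forecast(0.4, steps=5)   # members still agree closely
long_range = ensemble_forecast(0.4, steps=60)   # members have spread out

spread_short = statistics.pstdev(short_range)
spread_long = statistics.pstdev(long_range)
```

A small ensemble spread signals a nearly certain forecast; a large spread warns the forecaster that the situation is inherently unpredictable at that range.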
Targeted observations (also known as adaptive observing systems) also seem likely to reduce errors in forecasts. Calculations are made to find where extra observations would most improve a forecast for a given time and place. Extra observations from that region can then be obtained using aircraft, satellites, dropsondes, or other novel observing systems to reduce the uncertainty in the analysis of that area. When these starting conditions are used to generate a forecast, errors can be reduced in the pre-defined region of interest. Experimental programs have used this technique, but it is not yet in operational use anywhere. The method has the potential to greatly reduce forecast errors in any numerical forecasting system.
Aside from weather and climate prediction, weather modification has been (often covertly) attempted since the 1950s, often by the military but also at airports. Even setting aside anecdotal accounts of weather modification used as a "weapon" (such as the supposed cloud seeding by US troops during the Vietnam conflict), it is clear that unilateral weather modification may lead to political tensions. Especially in the Middle East, the possibility of wars over water supply looms for this century (Hussein's Iraq used surface engineering to block water from entering the land of the Marsh Arabs (http://usinfo.state.gov/regional/nea/iraq/crimes/crimes3.htm)). While many of the proposed systems for modifying the water cycle belong more to the domain of engineering than to meteorology, it is clear that meteorology has taken on additional political dimensions, such as the IPCC climate change mitigation proposals and the UNFCCC pollution control limits with climate support payments from industrialized countries to developing countries.
Finally, meteorologists must do more to educate the public about weather and climate in general. Scientifically accurate and understandable information about topics like the ozone layer, climate change, the effects of deforestation, or sea level rise must be disseminated, and misinformation spread by special-interest groups countered. Particularly in Europe, which may see an increase in extreme weather events as it already did in the 1990s, the population must be taught to pay closer attention to severe weather warnings and to information about other detrimental health factors, such as high tropospheric ozone concentrations or high levels of UV radiation. Similarly, a better infrastructure for dealing with natural disasters must be developed, akin to existing services in the US. Political decision makers should rely on scientific assessment and properly prepare for weather events and climate effects.
Large-scale wind mapping
The direction, strength, and variability of winds over various parts of the earth have long been of great interest. The first large-scale wind mapping efforts were done to aid sailors who depended on winds to power their ships. In the last four decades, a considerable amount of work has been done to map winds that might be used to generate power. Mapping the wind is difficult because the atmosphere is a chaotic system: weather forecasting is extremely difficult to do accurately for more than a few days. However, statistical methods can be used to predict general expectations of wind. Resolution can be low, e.g. maps of an entire continent, or high, where a specific site is assessed. High-resolution maps are used, for example, to assess the viability of a site for wind turbines.
The inputs to the mapping process are measured wind data and topography, possibly supplemented by other meteorological data relating to seasonal variation, pressure, and humidity. Humidity and temperature affect the density of the air, and thus the power produced at a given wind speed. Proximity to the sea has diurnal and seasonal effects on wind patterns.

A typical process for a large-scale wind map takes the wind data and separates it into groups based on direction and speed. For each group, the wind is simulated using computer-based models, and the result is recorded. The results for each group are then averaged according to the frequency of that speed and direction 'bucket', giving the average wind over an area. This is only indicative; before a wind farm is constructed, detailed local measurements are essential.

Models differ in how they treat terrain. The 'WAsP' model uses a simple algorithm to predict wind over simple topography. 'Simple' terrain can be modelled using a simple mass balance: if there is a hill in the way, the wind is accelerated. This is the Bernoulli effect, and the core of the WAsP model. WAsP does not solve the Navier-Stokes equations, which, in principle, give the correct answer to how the speed and direction of wind change as the flow crosses 'complex' terrain.

For small areas, the Navier-Stokes equations can be solved with some success, a technique employed more often as computer power grows to match the task. With this method, the volume of air around the site of interest is divided into discrete chunks. Each chunk is considered independently, like a pixel of a computer display, and interacts with its neighbours according to their pressures (although this can be neglected) and wind speeds. The smaller the chunks, the closer the result is to reality, although large divergence is not uncommon, and these models are extremely computationally expensive to run.

Wind is powered by temperature differentials. It is slowed by obstructions and is generally stronger at high altitudes.
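The bucket-and-average step described above can be sketched as follows. The wind records, the sector and speed-class widths, and the choice of wind power density (P = 0.5 ρ v³, with a standard sea-level air density) as the quantity being averaged are all illustrative assumptions, not a specific production method:

```python
from collections import Counter

# Hypothetical measured records: (direction in degrees, speed in m/s).
records = [(270, 8.2), (280, 7.5), (90, 3.1), (265, 9.0), (100, 2.8),
           (275, 8.8), (260, 7.9), (95, 3.4), (270, 8.5), (285, 7.2)]

SECTOR = 30       # width of each direction sector, degrees
SPEED_BIN = 2.0   # width of each speed class, m/s
RHO = 1.225       # standard sea-level air density, kg/m^3

def bucket(direction, speed):
    """Assign a record to a (direction sector, speed class) bucket."""
    return (int(direction // SECTOR), int(speed // SPEED_BIN))

counts = Counter(bucket(d, s) for d, s in records)
total = sum(counts.values())

def bucket_speed(key):
    """Representative speed for a bucket: the midpoint of its speed class."""
    return (key[1] + 0.5) * SPEED_BIN

# Frequency-weighted mean wind power density, W per m^2 of swept area.
mean_power = sum(counts[k] / total * 0.5 * RHO * bucket_speed(k) ** 3
                 for k in counts)
```

In a real mapping exercise each bucket would be fed to a flow model (WAsP-style or CFD) rather than reduced to a midpoint speed, but the frequency-weighted averaging over buckets works the same way.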
Plains have high winds because they have few obstructions. Mountain passes have high winds mostly because they funnel high-altitude winds. Some passes have winds powered by a temperature differential between the sides of the ridges. Coastal areas have high winds because water has few obstructions and because of the temperature difference between the land and the sea. Off-shore also generally has high winds for the same reasons.
Meteorological topics and phenomena
Climatic or Atmospheric Patterns: Alberta clipper | El Niño | Derecho | Gulf Stream | La Niña | Jet stream | North Atlantic Oscillation | Madden-Julian oscillation | Monsoon | Pacific decadal oscillation | Pineapple Express | Scirocco | Siberian Express | Walker circulation