predicts star formation rates that are lower than observational estimates by a factor of 2 in the innermost regions and by up to a factor of roughly 5 at larger radii. Interestingly, recent work by Granato et al. (2015) also finds SFRs that are too low in hydrodynamical simulations of protoclusters when comparing theoretical results with the observational estimates of Clements et al. (2014) and Dannerbauer et al. (2014). It should be noted that most, if not all, protoclusters known at z > 2 are selected as overdense regions around radio galaxies. The elevated star formation rates measured for distant radio galaxies can only be sustained for a short period of time. Radio-galaxy-selected protoclusters might therefore be a special subset of protoclusters with very high star formation rates, which could at least in part explain the mismatch between data and model predictions.


The model is sensitive to changes in site-specific parameters such as the coal properties (ultimate analysis, density, thermal conductivity, etc.). The 2-D model can be adapted to predict the cavity growth of a UCG generator at any site. Results obtained with the model after changing process parameters such as the air injection rate and the oxygen concentration in the oxidant were compared, and the model was found to be suitable for predicting cavity development over a range of process conditions. The model was validated against data from the Chinchilla and the Hanna II and III trials; the error in the coal consumption prediction is less than 5%.

Figure 3: Geometric interpretation of the prediction scores. (c) Prediction scores measure the distance between DGMs along the dimension of the model used to make predictions.


In an effort to obtain low-cost routine space debris observations in low Earth orbit, ESA plans to utilise the radar facilities of the European Incoherent Scatter Scientific Association. First demonstration measurements were performed from 11 to 23 February 2001; in total, 16 hours of radar signals were collected. Here we compare these initial measurements with the predictions of the ESA MASTER/PROOF'99 model in order to assess the sensitivity as well as the reliability of the data. We find that while the determination of object size needs to be reviewed, the altitude distribution provides a good fit to the model prediction. The absolute number of objects detected in the various altitude bins indicates that the coherent integration method indeed increases the detection sensitivity compared with incoherent integration. In the data presented here, integration times from 0.1 to 0.3 s were used. As expected, orbit information cannot be obtained from the measurements if they are linked to ionospheric measurements as planned. In addition, routine space debris observations also provide useful information for the validation of large-object catalogues.

We present in this paper an analytical model able to analyze reinforced concrete structures loaded in combined bending, axial load and shear, in the framework of nonlinear elasticity. In this model, the expression adopted for the section's stiffness matrix does not use a constant shear modulus G = f(E) as in linear elasticity, but rather a variable shear modulus that is a function of the shear variation, using a simple formula. In this part, we present a computational model of reinforced concrete beams in three dimensions (3D); this model is then extended to spatial structures in the second part. A computing method is then developed and applied to the analysis of several reinforced concrete beams. Comparison of the results predicted by the model with several experimental results shows that the model predictions are in good agreement with the experimental behavior in every regime (pre-cracking, post-cracking, post-steel-yielding, and fracture of the beam).

Figure 1: Study area along the Mendocino coast in Northern California. The two sites, Abalone Point and the Ten Mile State Marine Reserve, are outlined in green. Purple points indicate survey locations.
Figure 2: Average per-transect densities of all species or species groups observed during SCUBA surveys.
Figure 3: Size frequency distributions for kelp greenling between Abalone Point and the Ten Mile State Marine Reserve.
Figure 4: Size frequency distributions for lingcod between Abalone Point and the Ten Mile State Marine Reserve.
Figure 5: Canonical Correspondence Analysis (CCA) on associations between five abundant fish species observed at the study site and two categorical habitat variables: substrate and vertical relief. Black points are sampling locations, red triangles are species, and blue vectors indicate habitat variables.
Figure 6: Generalized additive model response curves for kelp greenling density versus (a) (percent hard substrate)² and (b) fine-scale topographic position index (TPI). Solid lines = mean, dashed lines = +/- SE. Rug plot along x-axis indicates observed values used to train models.
Figure 7: Generalized additive model response curves for kelp greenling biomass versus (a) (percent hard substrate)² and (b) fine-scale topographic position index (TPI). Solid lines = mean, dashed lines = +/- SE. Rug plot along x-axis indicates observed values used to train models.
Figure 8: Predicted kelp greenling density. (a) Map of model predictions across the study area. Grey regions indicate areas where no predictions were made; (b) subset of model predictions around Abalone Point; (c) subset of model predictions in the Ten Mile State Marine Reserve; (d) distribution of standard errors associated with the kelp greenling density model.
Figure 9: Predicted kelp greenling biomass. (a) Map of model predictions across the study area. Grey regions indicate areas where no predictions were made; (b) subset of model predictions around Abalone Point; (c) subset of model predictions in the Ten Mile State Marine Reserve; (d) distribution of standard errors associated with the kelp greenling biomass model.


An additional validation study was performed in order to demonstrate that the ZGB model, which produced more accurate predictions than the two other models employed for the Reitz nozzle layout, is in general capable of predicting cavitating flows. The predictions of the model were compared against the experimental data of Sou et al. (2007), who visualized the development of cavitating flow in a two-dimensional throttle, as well as the topology of the spray downstream of the throttle outlet. Water at a temperature of T = 333 K was considered as the working medium for the test case used for the validation of the model predictions. As depicted in Fig. 4a, the predictions of the ZGB model for the length of the emerging cavity (non-dimensionalized by the throttle length) are in close agreement with the experimental data, and the steep increase of the cavity length with increasing Reynolds number is well captured.


Two results emerged from this simulation that are worth reporting. First, prediction error (quantified as RMSE) was similar across the 19 weight-computing approaches, with a few noticeably poor exceptions (the two MBMC approaches, minimal variance and the cos-squared scheme: Fig. 6), and most were no better than the best nine single-model predictions. Second, most averaging approaches gave some weight (w > 0.01) to ten or more models (Table 2), despite the models being overlapping and partially nested, so that we actually have only five (more or less) independent models (those containing only one predictor: m2, m3, m5, m9 and the intercept-only m1). In real data sets, such spreading of weight is the result of data sparseness or extreme noise, which makes important effects stand out less; indeed, half of our candidate models are not hugely different, i.e. within ∆AIC < 4.
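For reference, AIC-based model weights of the kind reported in Table 2 are conventionally computed as Akaike weights (a minimal sketch; the AIC values below are hypothetical, not taken from this study):

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC)."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AIC values for five candidate models.
weights = akaike_weights([100.0, 101.2, 103.5, 104.0, 110.0])
```

Models within ∆AIC < 4 of the best model receive non-negligible weight under this scheme, which is why many overlapping models can share weight.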


We were able to improve predictions substantially by incorporating parameters other than task into the models; however, whether the predictions reported here are good enough to be used for exposure assessment remains to be determined. There are currently no standards for evaluating occupational predictions. Svendsen et al. [25] concluded that their task-based predictions, which had R² values mostly below 0.2, were "inefficient" for use in epidemiological studies. Our expanded-model R² values, which range from 0.22 to 0.58, are higher than these but lower than the 0.77 to 0.92 reported by van der Beek et al. [17]. In terms of error, Chen et al. [14] concluded that their prediction of whole-body vibration, which had a mean relative RMS error of 11%, could be a "useful" method of exposure assessment. Similarly, Xu et al. [45] reported RMS errors of 8-12% for their predictions and suggested that these models may be "practical" for use in field studies. Our expanded-model predictions had relative RMS errors within this range (9-21%). An important test of our predictions, which we were unable to perform in the current study but which should be explored in the future, is whether they are able to predict health outcomes.
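One common definition of the relative RMS error used in such comparisons normalizes the RMSE by the RMS of the observations (a sketch under that assumption; definitions vary between studies, and some normalize by the mean or the range instead):

```python
import numpy as np

def relative_rms_error(y_true, y_pred):
    """RMS error expressed as a percentage of the RMS of the observations."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.sqrt(np.mean(y_true ** 2))
```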


In Figure 32 (a), the 36-time-step predictions of the Persistence, ARIMA, and Univariate Stack LSTM models are presented for one particular sample. The figure shows that the Univariate Stack LSTM predictions cannot follow the exact pattern over the whole forecast horizon, but are close to the true values at the first and last time steps. Because the Persistence forecast reuses the values from the last 24 hours, its predictions are not appropriate for the next 36 hours. The ARIMA model produces constant predictions over the whole forecast horizon. The first-time-step and last-time-step predictions of the Univariate Stack LSTM from 500 samples are shown in Figure 32 (b) and (c), respectively. It can be seen that the model makes accurate predictions for the first time step, whereas for the last time steps there is no correlation between actual and predicted values. Because the time series has no periodic pattern, it is difficult for the model to make accurate predictions over the whole forecast horizon. Figure 31 (a) shows the actual values and predictions for all 36 forecast points (in light color), the first time step only (in blue), and the last time step only (in green) from all samples. The actual values and predictions behave linearly for the first time steps of all samples, but when the actual values are very high, the model cannot predict those points accurately. As discussed for Figure 32 (c), the scatter plot also shows no correlation between actual and predicted values for the last time steps. In conclusion, the predictions for the last time steps are spread out and far less linearly related to the actual values. As also seen in Figure 31 (b), the RMSE increases quickly for the later time steps.
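A persistence baseline of the kind described above, together with the per-horizon-step RMSE behind Figure 31 (b), can be sketched as follows (a minimal illustration; the function names are ours, and the 24-hour repetition period follows the description in the text):

```python
import numpy as np

def persistence_forecast(history, horizon, period=24):
    """Persistence baseline: repeat the last `period` observed values
    to cover the forecast horizon."""
    last = np.asarray(history, dtype=float)[-period:]
    reps = int(np.ceil(horizon / period))
    return np.tile(last, reps)[:horizon]

def rmse_per_step(y_true, y_pred):
    """RMSE computed separately for each forecast step, across samples.
    y_true, y_pred: arrays of shape (n_samples, horizon)."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean(err ** 2, axis=0))
```

Plotting the output of `rmse_per_step` against the step index reproduces the kind of error-growth curve described for the later time steps.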


[Figure: typical model deployment architecture. Data flows from the data warehouse to train the model; the deployed model then applies predictions to produce results.] Disadvantages: very time consuming on large …


The fluctuations in glucose concentrations over the observation period were due primarily to the response to food (Figure 4, column 1 [zero dose]). In this study, subjects were given breakfast, followed by lunch ~4 hours later. The initial peak in glucose corresponds to the ingestion of breakfast and the subsequent peak 4 hours later to lunch. Sampling to characterize these two events showed two blood glucose peaks at ~2 and 6 hours, temporally related to the ingestion of food. The model selected to describe these data is empirical and has no physiological meaning; the placebo data could also have been fit with a series of spline functions or a polynomial.
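An empirical fit of the kind described, carrying no physiological interpretation, might look like this (a sketch with hypothetical placebo data; the polynomial degree is arbitrary):

```python
import numpy as np

# Hypothetical placebo glucose samples: time in hours, concentration in
# mmol/L, with peaks near 2 and 6 hours as described in the text.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
glucose = np.array([5.0, 6.8, 7.9, 6.2, 5.4, 6.9, 7.6, 6.0, 5.2])

# Purely empirical polynomial fit: it tracks the two post-meal peaks
# without any mechanistic meaning.
coeffs = np.polyfit(t, glucose, deg=6)
fitted = np.polyval(coeffs, t)
```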


in the past. This equation is based on the assumption that the value of the time series y depends on its previous values and also on the values of some other attributes in the past. For example, for CPI inflation prediction, the future value of the CPI may depend on its previous values and on the values of some other factors such as gross domestic product (GDP), monetary supply (M2), and so on. The purpose of multi-step prediction is to obtain predictions several steps ahead into the future, y_{t+1}, y_{t+2}, …
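One common way to obtain such multi-step predictions is recursive forecasting, in which a one-step-ahead model is applied repeatedly to its own output (a minimal sketch; the AR(1) model with coefficient 0.5 is purely illustrative):

```python
def recursive_multistep(model_fn, history, steps):
    """Recursive multi-step prediction: each one-step-ahead prediction
    is appended to the history and fed back in to predict the next step."""
    hist = list(history)
    preds = []
    for _ in range(steps):
        y_next = model_fn(hist)  # one-step-ahead model
        preds.append(y_next)
        hist.append(y_next)
    return preds

# Illustrative one-step model: AR(1) with a hypothetical coefficient of 0.5.
ar1 = lambda h: 0.5 * h[-1]
print(recursive_multistep(ar1, [8.0], 3))  # [4.0, 2.0, 1.0]
```

Error compounds under this scheme, which is one reason multi-step forecasts degrade at longer horizons.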


Then we identify landmarks and apply landmark registration. We produce f̂ᵢ*(t) by smoothing points in the registered time scale. We use the fitted models of coefficients in the registered time scale to simulate new curves in the registered time scale, f_simu,i*(t*). Next, we use Dirichlet simulation to create new landmarks and map our curves back to the real time scale. Finally, we add noise using the model of measurement error. The first model, with which we had problems, took a shortcut, as marked by the red arrow. The model we propose instead takes a detour through registration, because registration improves the fit.


trading can be very volatile. Therefore, when entering a business contract that will be settled sometime in the future, forward contracts may be a better choice. In other words, forward rates serve as a hedging tool in currency trading, since they reduce the risk of sudden price movements. For example, in China there has been significant demand growth for Malaysian pineapples, with the import rate expected to double to RM320 million annually by 2020 [7]. Suppose a small Malaysian pineapple farm decides to export its products to China for the next harvesting season. As of January 17, 2018, the exchange rate between the two countries' currencies was 1 Malaysian Ringgit (MYR) to 1.6252 Chinese Yuan (CNY) [8]. The farm could wait, sell its products, receive CNY, and convert to MYR at the exchange rate prevailing on the harvest date, or it could lock in a forward rate established now by using a forward contract. By using the latter strategy, the farm eliminates the risk of currency fluctuation. Nevertheless, if the farm owners are risk-seeking and expect a favorable exchange rate in the future, they may choose to wait and use the spot rate later on. Through the development of this project, we aim to provide a suggested course of action for scenarios similar to this one, based on the predicted exchange rates generated by our model.
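The choice the farm faces can be illustrated numerically (a sketch using the spot rate quoted above; the revenue figure and the two future-rate scenarios are hypothetical, not predictions):

```python
def revenue_in_myr(revenue_cny, rate_cny_per_myr):
    """Convert CNY sale proceeds to MYR at a given CNY-per-MYR rate."""
    return revenue_cny / rate_cny_per_myr

# Spot rate from the text: 1 MYR = 1.6252 CNY on 2018-01-17.
revenue_cny = 100_000.0  # hypothetical sale proceeds in CNY

# Locking a forward at today's level fixes the MYR proceeds, whatever
# the spot rate turns out to be at harvest.
locked = revenue_in_myr(revenue_cny, 1.6252)        # forward-contract outcome
weaker_cny = revenue_in_myr(revenue_cny, 1.70)      # scenario: CNY weakens
stronger_cny = revenue_in_myr(revenue_cny, 1.55)    # scenario: CNY strengthens
```

The forward contract forgoes the upside of the strong-CNY scenario in exchange for removing the downside of the weak-CNY one, which is exactly the hedging trade-off described above.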


The original conceptualisation of the nitrogen cycle in both SWAT and SWAT-G is based on the EPIC model (Williams et al., 1984). However, the EPIC-based SWAT model failed to predict the N-cycle reasonably, since high denitrification losses of up to 135 kg N ha−1 yr−1 were simulated for single HRUs within the Dill catchment. This can be explained by the conceptualisation of denitrification in SWAT (Neitsch et al., 2002): denitrification occurs whenever soil moisture exceeds 95% of field capacity. Since water will only percolate in the model if soil moisture exceeds field capacity, denitrification and nitrogen leaching are two heavily competing processes in the EPIC-based SWAT versions. Under humid climatic conditions, where soils are moist for extended periods of the year, the EPIC approach leads to a rapid and complete depletion of the simulated nitrate pools in each layer due to denitrification (Pohlert et al., 2005).
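The competing thresholds described above can be summarised schematically (an illustrative sketch of the logic, not actual SWAT code; names are ours):

```python
def n_process_flags(soil_water, field_capacity):
    """Sketch of the EPIC-based SWAT thresholds described in the text:
    denitrification is triggered above 95% of field capacity, while
    percolation (and hence nitrate leaching) only occurs above field
    capacity, so the two processes compete whenever soils are wet."""
    denitrification = soil_water > 0.95 * field_capacity
    percolation = soil_water > field_capacity
    return denitrification, percolation

assert n_process_flags(0.97, 1.0) == (True, False)  # denitrification only
assert n_process_flags(1.05, 1.0) == (True, True)   # both processes active
```

The narrow band between the two thresholds is where denitrification can deplete the nitrate pool before any leaching occurs, matching the behaviour reported for humid conditions.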


A stochastic implementation of the Multiple Mapping Conditioning (MMC) approach has been applied to a turbulent jet diffusion flame (Sandia Flame D). This implementation combines the advantages of the basic concepts of a mapping closure methodology with a probability density approach. A single reference variable has been chosen. Its evolution is described by a Markov process and then mapped to the mixture fraction space. Scalar micro-mixing is modelled by a modified "interaction by exchange with the mean" (IEM) mixing model, in which the particles mix with their means conditionally averaged in reference space. The formulation of the closure leads to localness of mixing in mixture fraction space and consequently improved localness in composition space. Results for mixture fraction and reactive species are in good agreement with the experimental data. The MMC methodology allows for the introduction of an additional "minor dissipation time scale" that controls the fluctuations around the conditional mean. A sensitivity analysis based on the conditional temperature fluctuations as a function of this time scale does not endorse earlier estimates for its modelling; only relatively large dissipation time scales, of the order of the integral turbulence time scale, yield acceptable levels of conditional fluctuations that agree with experiments. With the choice of a suitable dissipation time scale, MMC-IEM thus provides a simple mixing model that is capable of capturing extinction phenomena, and it gives improved predictions over conventional PDF predictions using simple IEM mixing models.
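The IEM relaxation that this approach modifies can be sketched in one explicit Euler step (a minimal illustration; in an actual MMC implementation the conditional means are computed in reference space, which is not shown here):

```python
import numpy as np

def iem_mixing_step(phi, tau, dt, cond_mean=None):
    """One explicit Euler step of IEM mixing:
        d(phi)/dt = -(phi - <phi>) / tau
    In standard IEM, <phi> is the unconditional ensemble mean; in the
    MMC variant described above it is the mean conditional on the
    reference variable (supplied here via `cond_mean`)."""
    phi = np.asarray(phi, dtype=float)
    mean = np.mean(phi) if cond_mean is None else cond_mean
    return phi + dt * (mean - phi) / tau
```

Note that the step conserves the mean it relaxes toward; the dissipation time scale `tau` plays the role of the time scale whose choice is discussed above.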


2.6.1 Reliability. A predictive distribution (PD) is reliable if it is statistically consistent with the observations, i.e. if the observations are realisations of the PD. In this paper, the reliability of the PD is evaluated using the predictive QQ-plot (Laio and Tamea 2007, Thyer et al. 2009, Renard et al. 2010), which provides visual clues to the statistical consistency between the observed discharge and the PD. Assuming an observation of discharge at time t, y_t, is a realisation from the PD with cumulative distribution function F_t, the cumulative probability (P-value) p_t = F_t(y_t) is a realisation from a uniform distribution on [0, 1]. Thus, for t varying from 1 to the L observations in an event, the series of L P-values, one for each observation, will also be a realisation from a uniform distribution. The predictive QQ-plot is constructed by plotting the quantiles of the P-values against the corresponding theoretical quantiles of a uniform distribution on [0, 1]. The closer the points fall to the bisector (1:1 line), the better the agreement of the predictive distribution with the observations, with all points falling on the line indicating perfect agreement (Fig. 2). Deviations from the bisector indicate issues with prediction bias and predictive uncertainty. For example, if at the theoretical median the P-values are higher (lower) than the corresponding theoretical quantiles, the model systematically under-predicts (over-predicts) the observed data. A steep (flat) slope of the curve in the midrange (around theoretical quantiles 0.4-0.6) indicates an underestimated (overestimated) predictive uncertainty.
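The P-value computation behind the predictive QQ-plot can be sketched as follows (a minimal illustration with Gaussian predictive distributions; the function names and the plotting-position convention are our assumptions, and conventions vary):

```python
import math
import numpy as np

def norm_cdf(y, loc=0.0, scale=1.0):
    """Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((y - loc) / (scale * math.sqrt(2.0))))

def predictive_qq_points(observations, cdfs):
    """Points of a predictive QQ-plot. `cdfs` holds one predictive CDF
    per observation. Returns (theoretical uniform quantiles, sorted
    P-values); a reliable PD puts these points on the 1:1 line."""
    p_values = np.sort([F(y) for y, F in zip(observations, cdfs)])
    n = len(p_values)
    theoretical = (np.arange(1, n + 1) - 0.5) / n  # plotting positions
    return theoretical, p_values

# Illustrative check with synthetic data: predictive distributions
# evaluated at their own draws give roughly uniform P-values.
rng = np.random.default_rng(0)
mus = rng.normal(size=200)
obs = mus + rng.normal(size=200)  # y_t drawn from N(mu_t, 1)
cdfs = [(lambda y, m=m: norm_cdf(y, loc=m, scale=1.0)) for m in mus]
theo, pvals = predictive_qq_points(obs, cdfs)
```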


We begin with a slightly modified version of the BR model with two-sided altruism. Every family has one parent and one child. There are two periods. In the first period, each child is a member of her parent's household. In the second period, adult children maintain separate households from their parents. Every parent works only in the first period and supplies one unit of labor, which has a value, in efficiency units, of A. Children may also work during the first period; any time they spend at work has a value, in efficiency units, of 1. Time not spent working is spent in school. Any labor income a child receives is controlled by her parent. When children become adults (in the second period) they control their own incomes. They then supply one unit of labor, which has a value in efficiency units that depends on the amount of schooling they received during the first period. As in BR, we assume the return to education is given by the function h(e), which is assumed to satisfy h(0) = 1, h′(e) > 0, and h″(e) < 0. The single produced output good is the numeraire.
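For concreteness, one function satisfying these assumptions (an illustrative choice, not necessarily the one used by BR) is

```latex
h(e) = (1 + e)^{\gamma}, \qquad 0 < \gamma < 1,
```

so that h(0) = 1, h′(e) = γ(1 + e)^{γ−1} > 0, and h″(e) = γ(γ − 1)(1 + e)^{γ−2} < 0, i.e. schooling raises adult efficiency units at a diminishing rate.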
